Samsung’s tiny AI model beats giant reasoning LLMs – AI News

Samsung’s latest breakthrough in AI showcases a small but highly efficient custom AI model that outperforms massive large language models (LLMs) on reasoning benchmarks. The new Tiny Recursive Model (TRM), with only around seven million parameters, demonstrates that size isn’t everything when it comes to machine learning model performance.

Key highlights from the article include:

  • Samsung’s compact model surpasses larger LLMs in reasoning tasks while consuming significantly fewer resources.
  • The model is designed for efficiency, ideal for smartphones and edge devices where computational power is limited.
  • By leveraging a focused and task-specific design, Samsung delivers faster inference and operational cost benefits.
  • This approach opens the door for integrating advanced AI into more everyday applications without relying on heavy cloud infrastructure.

For businesses, this innovation reinforces the value of using Holistic AI strategies — particularly custom AI models — to solve domain-specific problems with high efficiency. Companies in martech and customer engagement can benefit significantly by applying lightweight models that enable low-latency inference for personalization, churn prediction, and satisfaction analysis directly on client devices.

This not only improves marketing operations but also enhances customer experience with more responsive, secure, and privacy-conscious AI systems. An AI consultancy or AI agency like HolistiCrm can help enterprises identify performance-critical workflows where such compact Machine Learning models create tangible business value — whether in mobile apps, IoT, or real-time analytics.

Read the original article: Samsung’s tiny AI model beats giant reasoning LLMs – AI News

Keeping European industry and science at the forefront of AI – commission.europa.eu

Europe is taking decisive action to stay at the global forefront of Artificial Intelligence by launching strategic initiatives that strengthen its AI ecosystem. The European Commission’s latest communication outlines key actions designed to foster innovation and competitiveness within the EU, particularly by ensuring robust investments, attracting and retaining digital talent, and supporting research and industry collaboration.

A few strategic pillars stand out:

  1. AI Investment Boost: The EU aims to raise at least €20 billion annually in AI investments to turbocharge innovation and commercial applications.

  2. European AI Factories and Testing Facilities: These infrastructures are intended to dramatically accelerate the development and testing of reliable AI systems by both SMEs and large enterprises.

  3. Regulatory Alignment and Governance: Europe’s AI governance framework, such as the AI Act, strives to balance innovation with trust and safety, giving companies a structured yet innovation-friendly playing field.

  4. Skills and Talent Strategy: By investing in education and mobility programs, the goal is to nurture a skilled workforce that can develop and deploy cutting-edge AI systems across sectors.

These actions create significant opportunities for AI-focused companies and martech players. For instance, one use case could be designing custom AI models for B2B marketing automation, trained on localized industry and customer behavior data sets. HolistiCrm, an AI agency or AI consultancy, could develop Machine Learning models tailored for European industrial sectors to optimize customer satisfaction, personalize outreach content, and improve performance across sales pipelines—all while aligning with emerging regulatory standards.

By anchoring such solutions in the holistic approach Europe demands—ethical, secure, and high-performing—AI experts can create unmatched business value for customers through transformational operational and marketing efficiencies.

Original article: Keeping European industry and science at the forefront of AI

Google’s latest AI model uses a web browser like you do – The Verge

Google’s latest step in artificial intelligence innovation demonstrates how deeply the boundaries are blurring between human behavior and machine capability. Google has introduced a Machine Learning model capable of navigating a web browser autonomously, mimicking user actions such as clicking, scrolling, and filling in forms. This behavior-based model, trained with reinforcement learning and imitation learning, shows promise in generalizing across web interfaces without requiring manual data formatting or API integration.

The key takeaway is this: instead of hardcoding solutions or relying on structured data, Google’s model learns how interfaces are used, delivering flexible, adaptable automation across websites. This marks a significant leap in what custom AI models can do, shifting from static knowledge retrieval to dynamic interaction with real-time online environments.
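The loop such a model implies can be sketched in miniature. The example below is purely illustrative (not Google's actual system or API): a scripted policy stands in for the learned model, and a tiny fake page stands in for the browser, but the observe, decide, act cycle has the same shape.

```python
# Illustrative observe -> decide -> act loop for a browser-driving agent.
# FakePage and scripted_policy are hypothetical stand-ins for a real
# browser environment and a learned action policy.
from dataclasses import dataclass, field

@dataclass
class FakePage:
    """Minimal stand-in for a browser page: a form with named fields."""
    fields: dict = field(default_factory=lambda: {"email": "", "name": ""})
    submitted: bool = False

    def observe(self):
        # A real agent would receive a screenshot or DOM snapshot here.
        return {"fields": dict(self.fields), "submitted": self.submitted}

    def act(self, action):
        if action["type"] == "type":
            self.fields[action["target"]] = action["text"]
        elif action["type"] == "click" and action["target"] == "submit":
            # Submission succeeds only once every field is filled in.
            self.submitted = all(self.fields.values())

def scripted_policy(observation):
    """Stand-in for the learned model: choose the next UI action from state."""
    for name, value in observation["fields"].items():
        if not value:
            return {"type": "type", "target": name, "text": f"demo-{name}"}
    return {"type": "click", "target": "submit"}

def run_agent(page, policy, max_steps=10):
    for _ in range(max_steps):
        obs = page.observe()
        if obs["submitted"]:
            break
        page.act(policy(obs))
    return page.observe()

result = run_agent(FakePage(), scripted_policy)
print(result["submitted"])  # True once both fields are filled and submitted
```

The point of the sketch is that nothing in the loop is site-specific: swap in a different page and the same policy interface still applies, which is exactly the generalization property the article highlights.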

From a business perspective, this opens up new use cases for martech, particularly in CRM systems and digital customer engagement. For example, a holistic CRM tool powered by such a model could autonomously gather and update customer information from a variety of online sources, orchestrate personalized outbound campaigns, or even simulate customer journeys for UX optimization — all enhancing marketing performance and increasing customer satisfaction.

Companies investing in AI consultancy or engaging an AI agency can harness these capabilities to lower manual overhead, improve data freshness in marketing databases, and accelerate response time in digital touchpoints. The key is deploying the right Machine Learning model with domain-specific fine-tuning, ensuring each model performs with intent aligned to business value.

This type of autonomy and generalization sets a new benchmark for marketing automation and martech solutions. More than a technological evolution, it signals an efficiency revolution led by behaviorally trained AI models.

Read the original article here: Google’s latest AI model uses a web browser like you do – The Verge

DeepSeek claims its new AI model can cut the cost of predictions by 75% – here’s how – ZDNET

DeepSeek has unveiled a breakthrough in AI model efficiency with its new DeepSeek-V2 model, promising a 75% reduction in prediction costs. This leap stems from architectural changes in the model’s Mixture-of-Experts (MoE) design. Rather than activating every expert for each prediction, DeepSeek-V2 routes each token to a small subset of its experts, cutting computational overhead while maintaining performance. Notably, the model has 236 billion parameters in total, but only about 21 billion are active during inference, striking a balance between efficiency and capability.
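The routing idea behind this can be shown with a toy example. The sketch below is illustrative only; the expert count, top-k value, and parameter sizes are made-up numbers, not DeepSeek-V2's actual architecture. It shows how top-k gating keeps the active parameter count a small fraction of the total.

```python
# Toy Mixture-of-Experts router. All sizes are hypothetical, chosen only
# to illustrate why routing to top-k experts slashes per-token compute.
import random

NUM_EXPERTS = 16               # hypothetical number of experts
TOP_K = 2                      # experts activated per token
PARAMS_PER_EXPERT = 1_000_000  # hypothetical parameters per expert

def route(gate_scores, k=TOP_K):
    """Return the indices of the k highest-scoring experts."""
    return sorted(range(len(gate_scores)), key=lambda i: -gate_scores[i])[:k]

random.seed(0)
# In a real MoE layer the gate scores come from a small learned network
# applied to the token's hidden state; here they are random placeholders.
gate_scores = [random.random() for _ in range(NUM_EXPERTS)]
active = route(gate_scores)

total_params = NUM_EXPERTS * PARAMS_PER_EXPERT
active_params = TOP_K * PARAMS_PER_EXPERT
print(f"active experts: {active}")
print(f"active fraction: {active_params / total_params:.1%}")  # 12.5%
```

With these toy numbers only 2 of 16 experts run per token, so 12.5% of the layer's parameters do the work; the same ratio logic is what lets a 236B-parameter model spend roughly 21B parameters' worth of compute per prediction.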

In a marketing and martech context, such efficiency gains can be transformative. Businesses currently using large Machine Learning models often face steep cloud costs when deploying custom AI models for real-time customer engagement or personalization. A model like DeepSeek-V2 allows companies to scale these operations while reducing infrastructure expenses significantly.

Holistically, this innovation enables faster deployment of AI features without compromising customer satisfaction or performance. For example, a retail company using AI to deliver dynamic pricing or individualized product recommendations can now drive those insights more frequently and at lower cost, leading to improved decision-making and higher conversion rates.

For AI consultancy and AI agency projects focused on optimizing digital marketing or CRM automation, integrating cost-efficient models like DeepSeek-V2 adds direct business value. It empowers customers to run high-performance models in production without escalating costs, enabling broader experimentation and refinement in martech strategies.

Original article: https://news.google.com/rss/articles/CBMisAFBVV95cUxQR05ra0FoVk9HN2hhc254eHRUYy1KZHJKZG1UcGRsV21BSmoyR0Y0ZTBYWnQ4VlpXWExzTDFfTW9hRnl0TVNIR3ZjSm9VVVhjaVktdGdpSldrOHpVVG5SazVKbGVTZ2VhLUhsUTZsSjhNNkRUY0NxUHR6dTBiOFpkekRRcU1xc2ZUX3d6SndzT3RVUHpGZkVqQXVhdDNaVDdJWjhPWk9MaEV1YUpJeWZyYg?oc=5

Meta Llama: Everything you need to know about the open generative AI model – TechCrunch

Meta’s latest release of the open generative AI model, Llama 3, signals a significant advancement in the AI landscape. The article explores how Llama 3 expands the frontier of open-source large language models (LLMs), empowering developers and businesses with robust capabilities comparable to those of closed models like GPT-4. Available in both 8B and 70B parameter variants and integrated into Meta’s AI assistant, Llama 3 positions itself as a cost-effective and competitive tool in martech and enterprise applications.

Key takeaways from the article include:

  • Open and Customizable: Llama 3’s open nature allows businesses to build tailored Machine Learning models that align with their specific performance needs, enabling more refined control over outputs and data privacy.
  • Hardware Agnostic and Scalable: Llama 3 is designed to run efficiently on diverse infrastructure, avoiding lock-in to proprietary platforms — a key for scalable martech deployment.
  • Multilingual Support and Reasoning Enhancements: The improvements in reasoning and multilingual capability increase customer reach and satisfaction across global touchpoints.
  • Community-Driven Innovation: By embracing the open-source ecosystem, Llama 3 accelerates iteration and innovation while lowering the adoption barrier for smaller companies and AI consultancies.

A relevant use case would be integrating Llama 3 into a customer relationship management platform like HolistiCrm. A custom AI model built on Llama 3 could serve as a conversational agent, handling multilingual customer queries, generating personalized marketing copy, or analyzing sentiment across support channels in real time. This not only enhances customer satisfaction but also boosts operational performance and marketing ROI.
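As a rough sketch of how such an integration might be wired, the example below routes CRM tasks to prompt templates. Everything here is hypothetical: the task names and templates are invented for illustration, and `generate` is a stub that a real deployment would replace with an actual Llama 3 inference call (for example, against a locally hosted instruct model).

```python
# Hypothetical dispatch layer mapping CRM tasks to prompt templates for a
# locally hosted Llama 3 model. Task names and templates are illustrative.
PROMPTS = {
    "support_reply": (
        "You are a multilingual support agent. Answer the customer in the "
        "language of their message.\n\nCustomer: {text}\nAgent:"
    ),
    "marketing_copy": (
        "Write a short, personalized marketing message for this customer "
        "profile:\n{text}\n\nMessage:"
    ),
    "sentiment": (
        "Classify the sentiment of this support message as positive, "
        "neutral, or negative.\n\nMessage: {text}\nSentiment:"
    ),
}

def build_prompt(task, text):
    """Fill the template for a known CRM task; reject unknown tasks."""
    if task not in PROMPTS:
        raise ValueError(f"unknown task: {task}")
    return PROMPTS[task].format(text=text)

def generate(prompt):
    # Stub standing in for a real Llama 3 inference call.
    return f"<model output for {len(prompt)}-char prompt>"

reply = generate(build_prompt("sentiment", "The new update broke my dashboard."))
```

Keeping task templates in one place like this also makes it easier to audit and version the prompts, which matters when model outputs feed directly into customer-facing channels.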

Enterprises leveraging Llama 3 with the guidance of an AI agency or AI expert can unlock transformative value — from holistically decomposing customer journeys to developing AI-driven workflows that respond dynamically to user behavior.

Full article and details: original article