

Slack's AI search now works across a company's entire knowledge base
Slack is introducing a number of new AI-powered tools to make team collaboration easier and more intuitive.
"Today, 60% of organizations are using generative AI. But most still fall short of its productivity promise. We're changing that by putting AI where work already happens — in your messages, your docs, your search — all designed to be intuitive, secure, and built for the way teams actually work," Slack wrote in a blog post.
The new enterprise search capability will enable users to search not just in Slack, but in any app that is connected to Slack. It can search across systems of record like Salesforce or Confluence, file repositories like Google Drive or OneDrive, developer tools like GitHub or Jira, and project management tools like Asana.
"Enterprise search is about turning fragmented information into actionable insights, helping you make quicker, more informed decisions, without leaving Slack," the company explained.
The platform is also getting AI-generated channel recaps and thread summaries, helping users catch up on conversations quickly. It is introducing AI-powered translations as well, enabling users to read and respond in their preferred language.
Anthropic's Claude Code gets new analytics dashboard to provide insights into how teams are using AI tooling
Anthropic has announced the launch of a new analytics dashboard in Claude Code to give development teams insights into how they are using the tool.
It tracks metrics such as lines of code accepted, suggestion acceptance rate, total user activity over time, total spend over time, average daily spend per user, and average daily lines of code accepted per user.
These metrics can help organizations understand developer satisfaction with Claude Code features, monitor code generation effectiveness, and identify opportunities for process improvements.
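The metrics above are straightforward aggregations over usage records. As a rough sketch of how a team might reproduce two of them from its own logs (the record fields below are illustrative, not Anthropic's actual export schema):

```python
from collections import defaultdict

# Hypothetical usage records; the field names are made up for illustration.
events = [
    {"user": "ada", "day": "2025-07-14", "suggested": 120, "accepted": 90, "spend": 4.50},
    {"user": "ada", "day": "2025-07-15", "suggested": 80,  "accepted": 60, "spend": 3.00},
    {"user": "lin", "day": "2025-07-14", "suggested": 50,  "accepted": 20, "spend": 1.25},
]

def acceptance_rate(events):
    """Overall suggestion acceptance rate: accepted lines / suggested lines."""
    suggested = sum(e["suggested"] for e in events)
    accepted = sum(e["accepted"] for e in events)
    return accepted / suggested if suggested else 0.0

def avg_daily_spend_per_user(events):
    """Average daily spend for each user across the days they were active."""
    per_user = defaultdict(list)
    for e in events:
        per_user[e["user"]].append(e["spend"])
    return {user: sum(s) / len(s) for user, s in per_user.items()}

print(acceptance_rate(events))          # 170 accepted out of 250 suggested
print(avg_daily_spend_per_user(events))
```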
Mistral launches first voice model
Voxtral is an open-weight model for speech understanding that Mistral says offers "state-of-the-art accuracy and native semantic understanding in the open, at less than half the price of comparable APIs. This makes high-quality speech intelligence accessible and controllable at scale."
It comes in two model sizes: a 24B version for production-scale applications and a 3B version for local deployments. Both sizes are available under the Apache 2.0 license and can be accessed through Mistral's API.
JFrog releases MCP server
The MCP server will allow users to create and view projects and repositories, get detailed vulnerability information from JFrog, and review the components in use at an organization.
"The JFrog Platform delivers DevOps, Security, MLOps, and IoT services across your software supply chain. Our new MCP Server enhances its accessibility, making it even easier to integrate into your workflows and the daily work of developers," JFrog wrote in a blog post.
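Under the Model Context Protocol, capabilities like the ones described above are exposed as named tools that a client (such as an AI assistant) invokes over JSON-RPC 2.0. A minimal sketch of what such a request looks like on the wire; the tool name and arguments here are hypothetical, not JFrog's published tool schema:

```python
import json

def make_tool_call(request_id, tool_name, arguments):
    """Build an MCP tools/call request (JSON-RPC 2.0 envelope)."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# Hypothetical tool name and argument for illustration only.
request = make_tool_call(1, "get_vulnerability_info",
                         {"component": "log4j-core:2.14.1"})
print(json.dumps(request, indent=2))
```

The `tools/call` method and the JSON-RPC envelope come from the MCP specification itself; everything inside `arguments` is server-specific.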
JetBrains announces updates to its coding agent Junie
Junie is now fully integrated into GitHub, enabling asynchronous development with features such as the ability to delegate multiple tasks simultaneously, the ability to make quick fixes without opening the IDE, team collaboration directly in GitHub, and seamless switching between the IDE and GitHub. Junie on GitHub is currently in an early access program and only supports JVM and PHP.
JetBrains also added support for MCP to enable Junie to connect to external sources. Other new features include 30% faster task completion speed and support for remote development on macOS and Linux.
Gemini API gets first embedding model
These types of models generate embeddings for words, phrases, sentences, and code, to provide context-aware results that are more accurate than keyword-based approaches. "They efficiently retrieve relevant information from knowledge bases, represented by embeddings, which are then passed as additional context in the input prompt to language models, guiding it to generate more informed and accurate responses," the Gemini docs say.
The embedding model in the Gemini API supports over 100 languages and a 2,048-token input length. It will be offered through both free and paid tiers, enabling developers to experiment with it for free and then scale up as needed.
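The retrieval flow the docs describe can be sketched with toy vectors: each document in a knowledge base is represented by an embedding, the query is embedded the same way, and the nearest documents are passed to the language model as extra context. The 3-dimensional vectors below are made up for illustration; a real embedding model returns much higher-dimensional vectors.

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical knowledge base: document -> precomputed embedding.
knowledge_base = {
    "refund policy":   [0.9, 0.1, 0.0],
    "shipping times":  [0.1, 0.8, 0.2],
    "api rate limits": [0.0, 0.2, 0.9],
}

def retrieve(query_embedding, k=1):
    """Return the k documents whose embeddings are closest to the query."""
    ranked = sorted(knowledge_base.items(),
                    key=lambda item: cosine(query_embedding, item[1]),
                    reverse=True)
    return [doc for doc, _ in ranked[:k]]

# A query embedded near the refund-policy vector retrieves that document,
# whose text would then be prepended to the model's input prompt.
print(retrieve([0.85, 0.15, 0.05]))  # ['refund policy']
```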