embedUR Accelerates Edge AI Development with Easy, Low-Cost Access Through Fusion Studio

Fusion Studio was designed to simplify the process of transitioning from model design to working prototypes. It brings together dataset preparation, retraining, benchmarking, and deployment in one environment. Engineers can test models on supported boards, measure performance in real time, and make improvements on their target hardware.

Built by embedUR systems, the platform reflects years of experience in embedded software and wireless engineering. It turns the usual chain of tools and manual steps into a single, continuous workflow that keeps developers closer to the hardware and closer to results.

Hardware-Aware Edge AI Development

Fusion Studio brings clarity to model deployment on embedded devices. Every stage, from dataset annotation to benchmarking, is tied to the behavior of real hardware, not abstract simulations. Engineers can retrain models, adjust parameters, and see in real time how those changes affect memory use, inference speed, or power consumption.

Pre-trained models from the ModelNova library can be adapted and tested on supported boards such as the Raspberry Pi, using compilers and runtimes already built into the platform, so the results on screen match the behavior engineers will see in the field.

The platform records comparative metrics automatically, so engineers can track trade-offs between model size, accuracy, and resource usage across iterations. Resources such as guided workshops and technical support are also available to help teams apply these insights efficiently.
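The trade-off tracking described above can be illustrated with a minimal sketch. This is not Fusion Studio's actual data model or API; the field names, budget values, and selection logic below are hypothetical, showing only the general idea of comparing iterations against a resource budget:

```python
from dataclasses import dataclass

@dataclass
class IterationMetrics:
    """One benchmarking run for a model candidate (illustrative fields)."""
    name: str
    size_kb: float      # compiled model size on the target board
    accuracy: float     # validation accuracy, 0..1
    latency_ms: float   # mean on-device inference latency

def best_within_budget(runs, max_size_kb, max_latency_ms):
    """Pick the most accurate candidate that fits the resource budget."""
    eligible = [r for r in runs
                if r.size_kb <= max_size_kb and r.latency_ms <= max_latency_ms]
    return max(eligible, key=lambda r: r.accuracy) if eligible else None

runs = [
    IterationMetrics("baseline",  4200, 0.91, 48.0),
    IterationMetrics("quantized", 1100, 0.89, 19.0),
    IterationMetrics("pruned",     800, 0.84, 15.0),
]
print(best_within_budget(runs, max_size_kb=2000, max_latency_ms=25.0).name)
# quantized
```

Recording each iteration in a structured form like this is what makes the size/accuracy/latency trade-off visible across retraining cycles, rather than living in an engineer's notes.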

Technical Workflow and Platform Features

Fusion Studio consolidates every stage of edge AI development into a single desktop environment, so engineers can move seamlessly from data preparation to deployment.

Dataset Preparation and Annotation

Datasets can be imported and labeled directly within Fusion Studio. The platform also provides custom annotation tools for precise categorization, ensuring that models accurately reflect the specific characteristics of the target application.

Training and Retraining

Fusion Studio supports retraining of pre-trained models from the ModelNova library. Training and retraining are handled efficiently on local resources, with optimized compilers and frameworks designed for edge processors. This enables engineers to iterate quickly, adjusting model parameters and architectures without the delays of transferring data or configuring remote compute instances.

Benchmarking and Analysis

Engineers can evaluate inference latency, throughput, model size, and memory usage directly on supported boards. Visual dashboards present these metrics in an organized manner, allowing rapid comparison between models or configurations. 
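The latency and throughput numbers above are the kind a simple on-device harness would collect. The sketch below is a generic illustration of that measurement loop, not Fusion Studio's benchmarking code; `run_inference` is a hypothetical stand-in for a single forward pass of the deployed model runtime:

```python
import statistics
import time

def benchmark(run_inference, warmup=5, iters=50):
    """Time an inference callable and summarize latency and throughput.

    run_inference is a stand-in for one forward pass on the target
    board; a real harness would invoke the deployed model runtime.
    """
    for _ in range(warmup):          # warm-up runs let caches settle
        run_inference()
    samples = []
    for _ in range(iters):
        start = time.perf_counter()
        run_inference()
        samples.append((time.perf_counter() - start) * 1000.0)  # ms
    samples.sort()
    return {
        "mean_ms": statistics.fmean(samples),
        "p95_ms": samples[int(0.95 * (len(samples) - 1))],
        "throughput_fps": 1000.0 / statistics.fmean(samples),
    }

# Dummy CPU workload standing in for a model forward pass:
stats = benchmark(lambda: sum(i * i for i in range(10_000)))
print(stats)
```

Reporting a percentile (p95) alongside the mean matters on embedded targets, where occasional scheduling or thermal hiccups can make worst-case latency far worse than the average.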

Compilation and Deployment

Once a model is validated, Fusion Studio provides direct compilation for target hardware. Supported boards, including Raspberry Pi, receive fully packaged, deployable binaries. This step eliminates the traditional gap between development and production hardware, ensuring that what works in the platform translates reliably to the field. 

Modular and Extensible Workflow

Fusion Studio accommodates new boards, runtimes, and model types without disrupting existing workflows. Experiments can be saved, shared, and replicated, allowing teams to standardize evaluation practices, reduce repeated effort, and maintain consistency across their projects.

Benefits for Embedded AI Engineers

Faster Development: Supports the full workflow from dataset preparation to deployment, shortening the time to reach a minimum viable product.

Better Resource Optimization: Makes trade-offs between model size, accuracy, inference speed, and power consumption transparent, guiding informed design decisions.

Hardware-Ready Testing: Allows immediate evaluation on preconfigured boards without requiring specialized lab setups.

Informed Iteration: Dashboards and metrics enable rapid comparison between models and configurations, improving optimization strategies.

Consistent Workflow: Experiments can be saved, shared, and replicated, reducing duplicated effort and maintaining uniform practices across teams.

Applications Enabled by Fusion Studio

1. Vision-Based Prototypes

Developers can build embedded vision systems for tasks like object detection, household object classification, or pose estimation. Pre-trained models from the ModelNova library, such as ResNet50 or MnasNet05, can be adapted with custom datasets and tested on hardware to optimize real-world performance for smart cameras or robotic vision.

2. Robotics and Automation Projects

Engineers can develop navigation, obstacle detection, and basic robotic control systems. Fusion Studio allows testing different model architectures and configurations to evaluate real-world responsiveness and resource usage, making iterative refinement faster and easier.

3. Audio and Interaction Systems

Voice commands, sound recognition, and interactive audio systems can be prototyped using models like Microspeech LSTM. Engineers can train models with their own recordings, test them in real conditions, and fine-tune them for embedded deployment.

The Fusion Studio beta currently supports testing and deployment on Raspberry Pi boards and is limited to vision models. Future updates will extend support to additional hardware and model types, enabling more complex edge AI applications across other domains.

Fusion Studio: Availability and Access

Fusion Studio is currently available through a beta program limited to vision-based edge AI projects. Participants gain access to the full desktop environment, preconfigured hardware platforms, training and benchmarking tools, and ModelNova’s library of pre-trained models.

Eligibility for the beta requires a clearly defined project scope, a dataset for retraining, and willingness to provide feedback on the platform. During the program, participants will receive support through dedicated technical managers, webinars, workshops, and a private discussion channel.

Full release of Fusion Studio is scheduled for Q1 2026. Planned capabilities include support for additional hardware platforms, audio and non-vision models, and expanded integration with user-provided models. This phased rollout ensures early participants can provide feedback while enabling the platform to scale to a broader range of edge AI applications.

Engineers and research teams interested in access can join the beta.