
April 16, 2024

Which GPUs Are the Most Relevant for Computer Vision?

In computer vision, selecting the appropriate hardware can be tricky because of the variety of usable machine-learning models and their significantly different architectures. Today's article explores the criteria for selecting the best GPU for computer vision, outlines which GPUs suit different model types, and provides a performance comparison to help engineers make informed decisions.
April 12, 2024

Digest 19 | OpenCV AI Weekly Insights

Dive into the latest OpenCV AI Weekly Insights Digest for concise updates on computer vision and AI. Explore OpenCV's distribution for Android, iPhone LiDAR depth estimation, simplified GPT-2 model training by Andrej Karpathy, and Apple's ReALM system, promising enhanced AI interactions.
April 11, 2024

OpenCV For Android Distribution

The OpenCV.ai team, creators of the essential OpenCV library for computer vision, has launched version 4.9.0 in partnership with ARM Holdings. This update is a significant step for Android developers, simplifying how OpenCV is integrated into Android apps and boosting performance on ARM devices.
April 4, 2024

Depth Estimation Technology in iPhones

The article examines the iPhone's LiDAR technology, detailing its use in depth measurement for improved photography, augmented reality, and navigation. Through experiments, it highlights how LiDAR contributes to more engaging digital experiences by accurately mapping environments.
April 2, 2024

Digest 18 | OpenCV AI Weekly Insights

Discover the latest in AI: Hume AI's EVI introduces emotional intelligence to technology, OpenAI's Voice Engine offers realistic voice cloning with ethical safeguards, RadSplat pushes VR rendering speeds, and xAI's Grok-1.5 advances in understanding complex contexts.
March 21, 2024

How To Train a Neural Network with Less GPU Memory: Reversible Residual Networks Review

Explore how reversible residual networks save GPU memory during neural network training. This technique, detailed in "The Reversible Residual Network: Backpropagation Without Storing Activations," allows for efficient training of larger models by not storing activations for backpropagation. Discover its application in reducing hardware requirements while maintaining accuracy in tasks like CIFAR and ImageNet classification.
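The core idea can be sketched in a few lines: a reversible block splits its input in two and couples the halves so that the input can be reconstructed exactly from the output, meaning intermediate activations need not be stored for the backward pass. Below is a minimal NumPy illustration, where `F` and `G` are simple stand-ins for the learned sub-networks used in the actual paper.

```python
import numpy as np

def F(x):
    # Stand-in for a learned sub-network (hypothetical, for illustration)
    return np.tanh(x)

def G(x):
    # Stand-in for a second learned sub-network
    return 0.5 * x

def forward(x1, x2):
    # Reversible residual coupling: each output half depends on the other
    y1 = x1 + F(x2)
    y2 = x2 + G(y1)
    return y1, y2

def inverse(y1, y2):
    # Exact reconstruction of the inputs from the outputs, so the
    # forward activations can be recomputed instead of stored
    x2 = y2 - G(y1)
    x1 = y1 - F(x2)
    return x1, x2

x1 = np.array([0.1, -0.3, 0.8])
x2 = np.array([0.7, 0.2, -0.5])
y1, y2 = forward(x1, x2)
r1, r2 = inverse(y1, y2)
assert np.allclose(x1, r1) and np.allclose(x2, r2)
```

Because the inverse is exact, a training framework can discard each block's activations after the forward pass and recompute them on the fly during backpropagation, trading a modest amount of extra computation for a large reduction in GPU memory.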