NATIX’s Decentralized Data Play Aims to Shatter the Autonomous Mapping Monopoly
Forget paying legacy players for stale map data—NATIX is turning every smartphone into a real-time mapping node. The startup’s blockchain-powered approach crowdsources fresh geodata while cutting out the middlemen who’ve been gatekeeping location services for decades.
How it works: Users earn crypto tokens for contributing anonymized sensor data from their devices. No more waiting for corporate fleets to slowly remap your neighborhood—this network updates constantly, with accuracy that scales as more participants join.
The kicker? Traditional mapping giants charge automakers up to $25 per car per month for data NATIX could provide at a fraction of the cost. It's another legacy cash cow ripe for disruption, assuming regulators don't strangle the model in the crib to protect the incumbents of Big Geo.
VX360 and decentralized data training
The VX360, developed in collaboration with Grab, leverages Tesla’s existing camera systems to collect 360° imagery without requiring expensive new hardware. The data is processed both in the cloud and at the edge, with smartphones detecting elements like traffic lights and signs in real time; more intensive classification is handled off-device.
Through Bittensor’s decentralized framework, NATIX miners are rewarded for training and improving AI models on the subnet, which are then redeployed across the NATIX Edge Network.
“The NATIX 360Data Subnet on Bittensor represents the convergence of Physical AI and decentralized intelligence,” said NATIX CEO and co-founder Alireza Ghods. “By decentralizing data analysis, we enable the continuous improvement of AI models, significantly enhancing mapping accuracy, autonomous vehicle safety, and real-world responsiveness.”
The subnet’s first focus is roadwork detection, a critical application for both mapping platforms and autonomous vehicle navigation. Over the longer term, NATIX hopes to expand its operations to include pothole detection and infrastructure analysis. Eventually, it expects to provide full scenario classification to support autonomous vehicle training.
Q&A with NATIX CEO Alireza Ghods
To better understand the impact of NATIX’s latest launch, crypto.news spoke with Alireza Ghods, CEO and co-founder of NATIX. The conversation focuses on the company’s approach to scaling decentralized data capture, why it believes its data can compete with tech giants, and how it plans to create real demand for its token economy.
The entire Q&A is below:
Alireza Ghods: Data collection for mapping and autonomous driving is extremely expensive. The large companies that collect this footage don’t share it, since the data gives them an edge in their products and offerings. None of these players has been collecting data at the scale NATIX can in a crowdsourced manner. The largest open-source dataset for autonomous driving is Learning to Drive (L2D), built in collaboration between two projects using identical sensor suites installed on 60 EVs operated by driving schools in German cities. Over three years, it logged only 5k hours. Since the launch of the VX360 on May 2nd, 2025, we have already collected 2k hours of driving in just 10 days, with only a third of the total pre-ordered devices live, and only in the US.
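The collection rates quoted above work out to a striking gap. A back-of-envelope sketch (all figures taken from the answer above; the day counts are approximations):

```python
# Rough comparison of data collection rates, using the figures quoted
# above. These are illustrative approximations, not official NATIX metrics.

L2D_HOURS = 5_000          # hours logged by the L2D dataset
L2D_DAYS = 3 * 365         # collected over roughly 3 years

VX360_HOURS = 2_000        # hours collected since the VX360 launch
VX360_DAYS = 10            # in the first 10 days
FLEET_FRACTION = 1 / 3     # only ~1/3 of pre-ordered devices live

l2d_rate = L2D_HOURS / L2D_DAYS                      # ~4.6 hours/day
vx360_rate = VX360_HOURS / VX360_DAYS                # 200 hours/day
projected_rate = vx360_rate / FLEET_FRACTION         # ~600 hours/day if all devices ship

print(f"L2D:            {l2d_rate:.1f} hours/day")
print(f"VX360 (now):    {vx360_rate:.1f} hours/day")
print(f"VX360 (full):   {projected_rate:.1f} hours/day")
```

Even without the remaining pre-orders, the crowdsourced rate is roughly 40x the L2D collection rate on these numbers.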
When it comes to mapping precision, NATIX may not match the precision and quality of dedicated mapping vehicles, but the sheer size of the network and the volume of data collected compensate for it, making the end result comparable to data collected by dedicated mapping vehicles. Take Grab, for example. Grab built its whole mapping solution on crowdsourced efforts, resulting in maps more accurate than Google Maps in SE Asia. For crowdsourced efforts to succeed, you need a large network of drivers and contributors. With over 250K registered drivers, NATIX does have such a network, and it’s the biggest camera DePIN network globally.
AG: For data collected with the VX360, the compute we perform for ScenGen is quite heavy and cannot be done at the edge, so we do that processing in the cloud. The data shared by users is only uploaded when they’re connected to their home Wi-Fi.
With that said, we use our smartphone network, which has edge compute power, to detect map attributes such as traffic signs and traffic lights in real-time.
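The split Ghods describes can be sketched as a simple dispatch rule: lightweight map-attribute detection runs on-device in real time, while heavy workloads are queued and only uploaded over home Wi-Fi. A hypothetical illustration (the task names, class, and methods here are assumptions for the sketch, not NATIX's actual API):

```python
# Hypothetical sketch of the edge/cloud split described above.
# EDGE_TASKS run on the smartphone in real time; everything else is
# deferred to the cloud and uploads only on home Wi-Fi.
from dataclasses import dataclass, field

EDGE_TASKS = {"traffic_sign", "traffic_light"}   # lightweight, on-device


@dataclass
class Device:
    on_home_wifi: bool
    upload_queue: list = field(default_factory=list)

    def process(self, task: str, frame: bytes) -> str:
        if task in EDGE_TASKS:
            # Real-time edge detection: no upload needed.
            return f"{task}: detected on-device"
        # Heavy tasks (e.g. ScenGen-style processing) are queued;
        # raw footage uploads only when on home Wi-Fi.
        self.upload_queue.append((task, frame))
        if self.on_home_wifi:
            uploaded = len(self.upload_queue)
            self.upload_queue.clear()
            return f"{task}: {uploaded} clip(s) uploaded for cloud processing"
        return f"{task}: queued until home Wi-Fi"
```

The design choice this illustrates: edge inference keeps map attributes fresh in real time without bandwidth cost, while the bandwidth-heavy footage rides for free on the user's home connection.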
AG: The VX360 device is quite cost-effective. With a $350 device, we tap into 360° street-level imagery while also providing utility for the drivers. Our strategy is tapping into commodity hardware. Everyone owns a smartphone, and Tesla vehicles already come equipped with 360° cameras, so it’s just a matter of tapping into what’s already out there. If we wanted to build our own 360° camera, it would be 20-100x more expensive than the VX360 in terms of hardware cost, and it would not be scalable as a crowdsourced solution.
As for further scaling, we are also in discussions with partners who want to operate fleets of VX360s: fleet owners or integrators who would sponsor the cost of the device in exchange for reward sharing. As an example, we are in discussion with a Tesla fleet owner with over 3,000 vehicles in their network. We are also building fleet management functionality for fleet operators to increase the utility of our product and go beyond crypto rewards.
AG: We have Grab as our customer, and we are in the process of closing a few of the largest autonomous driving players, both as direct customers for the data and partners for building various products for autonomous driving.
Every big player is a potential customer, if not now, then in the future. Some of the bigger players have invested millions into collecting their own datasets, which soon will become outdated as the roads are constantly changing. Even if some tell us they do not need our data today, they will need it soon enough.
Nevertheless, the AD companies that have invested in their own datasets make up no more than 20% of the market. The rest have either not spent much capital to collect data and will need NATIX’s data, or they currently rely on their customers’ data (if any) but cannot use it to improve the quality of their models for other customers. This is a real bottleneck for these companies in creating a holistic AD stack that can be monetized across all automotive clients.
AG: There are also successful models like Geodnet. I believe the winner here is the one that focuses on protocol revenue and has a proper mechanism for value accrual to the token (e.g., buyback and burn). Many of these projects suffer from a lack of protocol revenue or from poor tokenomics with no value accrual. In this regard, NATIX is closer to Geodnet and is following the same strategy.
Furthermore, you have to look at each business model individually. For example, Hivemapper’s front-facing dashcams collect high-quality street-level imagery for mapping, but NATIX’s 360° data can cater not only to mapping but also to autonomous driving and physical AI. 360° video data is the holy grail of street-level visual data, and not only is it worth 5-10x more than Hivemapper’s data, but it opens the door to countless new mapmaking use cases (e.g., POI detection with side cameras).
Moreover, the 360° data collected can be used for Physical AI, a much bigger topic ever since NVIDIA CEO Jensen Huang’s big announcement on stage at CES 2025. NATIX offers data diversity suitable for the training, testing, and validation of the autonomous driving stack and Physical AI at a fraction of the cost of the centralized solutions that companies like Nvidia and Uber invested millions to develop. This data is a painkiller for the simulation-to-reality and reality-to-simulation process in Physical AI development.
That is why we believe we’re solving a much bigger problem, one the industry is ready to pay for today. Note that cars operate with a surround camera system; therefore, training, testing, and validation of the autonomous driving stack is only possible with 360° data and cannot be done with front-facing dashcam data.
AG: These are two separate ecosystems with different functionality. NATIX is focusing on data curation, while the StreetVision Subnet focuses on (certain) insight extraction and AI model creation.
Each of these ecosystems generates its own value and has its own value accrual mechanism for its token. If anyone is interested in NATIX data, value accrues to the $NATIX token through protocol revenue and the buyback-and-burn mechanism. For those interested in the insights or AI models generated by the StreetVision Subnet, the revenue generated will be used for value accrual toward $dTAO (and partially toward $NATIX, as the data used for such insights and/or models is provided by NATIX).
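The buyback-and-burn mechanism mentioned above reduces to simple arithmetic: protocol revenue buys tokens on the open market and permanently removes them from supply. A minimal sketch, assuming hypothetical figures (neither the numbers nor the function are NATIX's actual implementation):

```python
# Minimal sketch of a buyback-and-burn value-accrual loop: protocol
# revenue purchases tokens at market price, which are then burned
# (removed from circulating supply). All figures are hypothetical.

def buyback_and_burn(revenue_usd: float, token_price_usd: float,
                     circulating_supply: float) -> float:
    """Return the circulating supply after burning bought-back tokens."""
    tokens_bought = revenue_usd / token_price_usd
    return circulating_supply - tokens_bought


supply = 1_000_000_000   # hypothetical circulating supply
supply = buyback_and_burn(revenue_usd=50_000,
                          token_price_usd=0.01,
                          circulating_supply=supply)
print(f"Supply after burn: {supply:,.0f}")
```

The mechanism ties token value to real protocol revenue: the more data customers pay for, the more tokens are removed from circulation, which is the "value accrual" Ghods contrasts with projects that lack it.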
Additionally, $NATIX is used for protocol governance and for securing the network. This is done via our staking platform, which enables users to participate in protocol governance. Validators in the StreetVision Subnet are also required to stake $NATIX.
AG: Anyone can run a validator for our subnet. Indeed, we are working with two other major Bittensor validator operators that will also run validator nodes. We also removed staking and $dTAO holding requirements for miners, as we wanted to ensure an open ecosystem. As you know, miner centralization is an even bigger concern than validator centralization, since the output of the network depends on the miners.
AG: StreetVision’s first task will be real-time roadwork detection. Detecting and classifying roadwork in real time is crucial for mapping updates and AV reliability. Later use cases will include detecting potholes, road signs, littering, and infrastructure monitoring. The StreetVision Subnet will also eventually include analysis and classification of driving videos into scenarios, from routine traffic conditions to rare “edge cases,” as scenario classification is critical for scenario generation in autonomous driving and Physical AI.
Grab is currently using NATIX’s data to build their pipelines for the US and EU markets, and they are a paying customer already.
As the VX360 just went live on May 2nd, we need some initial data to build sample datasets, which are required by autonomous driving customers. We are in commercial negotiations with dozens of autonomous driving companies. We are also working with a few of the top autonomous driving research labs to create cutting-edge products for simulation-to-reality and reality-to-simulation, which we will announce in the coming months.