As nations race to build the infrastructure that will underpin the next wave of artificial intelligence, access to cutting-edge compute is becoming as strategic as talent itself. Eleveight AI’s decision to deploy NVIDIA’s latest Blackwell B300 GPUs, the first such installation in Armenia, marks a significant milestone for the country’s AI ambitions and signals a broader shift in how emerging tech hubs are positioning themselves on the global stage.
In this interview, Arman Aleksanian, CEO and Board Member of Eleveight AI, discusses the strategic rationale behind investing early in next-generation AI accelerators, the evolving nature of AI workloads, and the technical demands of operating high-density GPU data centres. The conversation also explores Armenia’s growing role in the international AI ecosystem, the importance of sustainable energy in large-scale computing, and Eleveight AI’s roadmap for supporting enterprises, startups, and research organisations in the years ahead.
Eleveight AI’s deployment of NVIDIA Blackwell B300 GPUs is a first for Armenia. What strategic considerations drove the decision to invest in this latest-generation AI accelerator at this stage?
The decision to deploy the NVIDIA Blackwell B300 was based on a clear view of how quickly AI workloads are evolving. Models are becoming larger, inference is more complex, and efficiency requirements are rising fast. Building infrastructure around previous-generation hardware would have limited both our clients and our long-term plans. By investing in Blackwell now, we aligned Eleveight AI with where AI is going over the next several years rather than where it has been. This also allows Armenia to operate on the same level of AI infrastructure as leading global technology hubs from day one.
The Blackwell B300 is designed for large-scale training and inference. Across workloads such as generative AI, enterprise applications, and research computing, where do you expect to see the fastest uptake?
We expect the fastest uptake to come from generative AI, particularly large language models and multimodal systems that require both large-scale training and high-throughput inference. At the same time, enterprise AI is expanding rapidly in areas such as finance, security, analytics, and automation, where stable performance and data locality are critical. We are also seeing strong interest from applied research teams. Across all these use cases, the unifying factor is the need for scalable, efficient, and predictable computing infrastructure.
You have emphasised infrastructure readiness, including power stability and cooling. What were the primary technical challenges in preparing the data centre for high-density GPU operations, and how were they addressed?
High-density GPU operations require a fundamentally different approach to data centre design. The main challenges were ensuring stable power delivery, managing heat under sustained heavy loads, and maintaining long-term reliability. We addressed these issues through redundant power architecture, advanced cooling systems designed specifically for GPU-intensive workloads, and close integration between facility engineering, hardware and software. Infrastructure readiness for us is not a one-time task, but a continuous optimisation process.
As Armenia positions itself on the global AI infrastructure map, how do you see the country’s role evolving in the international AI ecosystem over the next few years?
Armenia has a strong opportunity to move beyond being primarily a talent exporter and become a regional AI infrastructure and innovation hub. With advanced computing resources available locally, startups, enterprises and research teams can build globally competitive AI systems without relocating. Over the next few years, we see Armenia playing an increasingly important role as a bridge between regional innovation and the wider international AI ecosystem.
How does the use of renewable energy sources support the operational and economic viability of your large-scale AI computing strategy?
Sustainability is essential for making large-scale AI computing viable in the long term. Renewable energy improves cost predictability, reduces exposure to energy price volatility and supports stable scaling. From an operational perspective, energy efficiency directly affects competitiveness, especially for AI workloads that run continuously. Integrating renewable energy allows us to grow responsibly while maintaining economic efficiency.
What should enterprises, startups, and research organisations expect from Eleveight AI in 2026 and beyond regarding capacity expansion and new services?
Looking ahead to 2026 and beyond, clients can expect continued expansion of computing capacity alongside a broader set of AI-focused services. Our roadmap goes beyond simply providing compute. We are focused on simplifying experimentation, enabling faster scaling, and making the transition from training to production more seamless. Ultimately, the goal is to remove infrastructure complexity so teams can focus on building and deploying AI systems.


