CUDA Libraries I Will Explore

There are an estimated 900 CUDA libraries from NVIDIA; I will select the ones relevant to my needs.

Library          | Purpose                                              | Robotics Use
cuBLAS           | GPU-accelerated linear algebra (matrix mult, LU, QR) | Rigid-body dynamics, transforms
cuSOLVER         | Linear system & eigen decomposition                  | Inverse kinematics, least squares
cuSPARSE / cuDSS | Sparse matrices & solvers                            | Large Jacobian systems, graph optimization
cuRAND           | Random number generation                             | Monte Carlo simulations, sensor noise
cuFFT            | Fast … Continue reading CUDA Libraries I Will Explore
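As a quick taste of where these libraries show up in practice, here is a minimal CuPy sketch, not code from the post: the 6x6 Jacobian, the damping factor, and the damped least-squares step are illustrative assumptions. CuPy dispatches the dense routines to cuBLAS and cuSOLVER and the random generation to cuRAND under the hood.

import cupy as cp

# Illustrative 6-DOF arm: a 6x6 Jacobian mapping joint velocities to an end-effector twist
J = cp.random.rand(6, 6)                    # random generation backed by cuRAND
twist = cp.random.rand(6)

lam = 1e-3                                  # damping factor (assumed, for a damped least-squares step)
JTJ = J.T @ J + lam * cp.eye(6)             # dense matrix products run through cuBLAS
qdot = cp.linalg.solve(JTJ, J.T @ twist)    # dense linear solve backed by cuSOLVER

print(cp.asnumpy(qdot))                     # copy the joint-velocity result back to the host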

NVIDIA Omniverse and AI Factory

NVIDIA Omniverse is a GPU-powered platform for building, simulating, and collaborating inside realistic, physics-based 3D worlds — used for robotics, digital twins, and AI-driven system design. NVIDIA Omniverse is built directly on top of CUDA and related NVIDIA SDKs. It’s essentially a “metaverse operating system” powered by accelerated computing. It's realized by foundational CUDA packages: … Continue reading NVIDIA Omniverse and AI Factory

NV-Tesseract Time Series Models

NV-Tesseract is a family of deep-learning models designed specifically for time-series data (i.e., sequences of values over time) rather than static tabular or image data. The presentation below is by Weiji Chen, AI/ML Engineer, NVIDIA. Evolving to Tesseract 2.0: Tesseract 2.0 then integrates the rest of NVIDIA's technologies, such as RAPIDS (Python and C++ libraries leveraging CUDA-X), cuDF, CuPy, RMM … Continue reading NV-Tesseract Time Series Models
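For flavor, here is a minimal sketch of the RAPIDS side of that stack, assuming a synthetic series and cuDF's rolling API; it is illustrative only and not taken from the presentation.

import cudf
import numpy as np

# Synthetic series standing in for a real time series (assumption for illustration)
ts = cudf.DataFrame({
    "t": np.arange(1_000),
    "value": np.random.randn(1_000).cumsum(),
})

# GPU-resident feature engineering with cuDF: rolling mean and first difference
ts["ma_20"] = ts["value"].rolling(window=20).mean()
ts["diff"] = ts["value"].diff()

print(ts.tail())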

Quant shops Utilize Python to Take Advantage of GPU Computing Benefits

Here's a more complete GPU-accelerated signal generation pipeline for quantitative finance using Python with CuPy and Numba.cuda. It includes: data simulation (price series), GPU-based moving averages, a nonlinear filter kernel (Numba.cuda), signal generation logic (crossover + filter), and batch processing of multiple assets in parallel.

import cupy as cp
import numpy as np
from numba import cuda
… Continue reading Quant shops Utilize Python to Take Advantage of GPU Computing Benefits
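Below is a minimal sketch of that pipeline shape, assuming illustrative sizes, window lengths, clip threshold, and a deliberately simple filter and signal rule; the post's full version is longer and differs in detail.

import cupy as cp
import numpy as np
from numba import cuda

# Illustrative sizes and windows (assumptions, not the post's values)
n_assets, n_steps = 8, 10_000
fast_w, slow_w = 20, 100

# 1) Simulate price paths for many assets at once (GPU random walk)
rets = cp.random.normal(0.0, 0.01, size=(n_assets, n_steps))
prices = 100.0 * cp.exp(cp.cumsum(rets, axis=1))

# 2) Trailing moving averages via cumulative sums (single pass on the GPU)
def moving_avg(x, w):
    c = cp.cumsum(x, axis=1)
    out = cp.empty_like(x)
    out[:, :w] = c[:, :w] / cp.arange(1, w + 1)
    out[:, w:] = (c[:, w:] - c[:, :-w]) / w
    return out

# 3) A toy nonlinear filter as a Numba CUDA kernel: clamp extreme returns in place
@cuda.jit
def clamp_returns(arr, limit):
    i, j = cuda.grid(2)
    if i < arr.shape[0] and j < arr.shape[1]:
        v = arr[i, j]
        if v > limit:
            arr[i, j] = limit
        elif v < -limit:
            arr[i, j] = -limit

threads = (16, 16)
blocks = ((n_assets + threads[0] - 1) // threads[0],
          (n_steps + threads[1] - 1) // threads[1])
clamp_returns[blocks, threads](rets, 0.03)   # Numba kernels accept CuPy arrays directly

# 4) Crossover signal: long when the fast average sits above the slow one
fast_ma, slow_ma = moving_avg(prices, fast_w), moving_avg(prices, slow_w)
signal = (fast_ma > slow_ma).astype(cp.int8)

print(signal.sum(axis=1))   # rough count of long periods per asset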

Claude Code CLI versus Using the Same LLM in IDEs like Cursor or Windsurf (Codex vs GPT-Codex 5)

I noticed something quite interesting lately — using Claude Sonnet 4.5 inside Claude Code CLI feels different from using the exact same model inside Cursor, Windsurf, or other IDEs. And honestly, Claude Code CLI outperforms the rest by a large margin. Even though both environments call the same LLM — Claude Sonnet 4.5 — the experience … Continue reading Claude Code CLI versus Using the Same LLM in IDEs like Cursor or Windsurf (Codex vs GPT-Codex 5)

New Themes and Ideas from NVIDIA GTC 2025

The big takeaway from Jensen Huang's GTC keynote: we are witnessing another paradigm shift — an Apollo moment for technology. AI is no longer a tool that aids work; it has become the work itself, solving problems directly rather than enabling humans to do so. With the end of Moore’s Law, traditional CPU scaling has plateaued. The answer … Continue reading New Themes and Ideas from NVIDIA GTC 2025

Three Years After ChatGPT: Why Most Firms Still Struggle to Build Their Own AI Models

It has been three years since ChatGPT first reshaped the AI landscape, yet surprisingly few organizations have managed to develop their own successful large language models (LLMs) trained on proprietary data. When I first thought about why this was happening, I suspected the problem lay in tokenization — the seemingly simple yet intricate process of … Continue reading Three Years After ChatGPT: Why Most Firms Still Struggle to Build Their Own AI Models