NVIDIA provides the cuFFT library, which one can use directly:

#include <iostream>
#include <cufft.h>

int main() {
    const int N = 8;

    // Host input (complex numbers: float2)
    cufftComplex h_signal[N];
    for (int i = 0; i < N; ++i) {
        h_signal[i].x = i;    // real part
        h_signal[i].y = 0.0f; // imaginary part
    }
    // …

… Continue reading Explore cuFFT
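The excerpt stops before the transform itself. Below is a minimal sketch of how a 1D complex-to-complex cuFFT transform typically continues (device allocation, plan creation, execution, copy-back), assuming the same 8-point signal; the variable names beyond the excerpt are illustrative, not the post's.

#include <cstdio>
#include <cuda_runtime.h>
#include <cufft.h>

int main() {
    const int N = 8;

    // Same host input as in the excerpt
    cufftComplex h_signal[N];
    for (int i = 0; i < N; ++i) {
        h_signal[i].x = (float)i;
        h_signal[i].y = 0.0f;
    }

    // Copy the host signal to the device
    cufftComplex* d_signal = nullptr;
    cudaMalloc(&d_signal, N * sizeof(cufftComplex));
    cudaMemcpy(d_signal, h_signal, N * sizeof(cufftComplex), cudaMemcpyHostToDevice);

    // Create a 1D complex-to-complex plan and run the forward transform in place
    cufftHandle plan;
    cufftPlan1d(&plan, N, CUFFT_C2C, 1);
    cufftExecC2C(plan, d_signal, d_signal, CUFFT_FORWARD);

    // Copy the spectrum back and print it
    cudaMemcpy(h_signal, d_signal, N * sizeof(cufftComplex), cudaMemcpyDeviceToHost);
    for (int i = 0; i < N; ++i)
        printf("bin %d: %f + %fi\n", i, h_signal[i].x, h_signal[i].y);

    cufftDestroy(plan);
    cudaFree(d_signal);
    return 0;
}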
HTTP 402 and Stablecoin
What Is HTTP 402? When Tim Berners-Lee designed HTTP in the 1990s, he reserved status code 402: “Payment Required” for a future where websites could charge users automatically for access or data. It was never widely implemented because: No native internet payment system existed; Credit cards required human input; Micropayments (fractions of a cent) weren’t … Continue reading HTTP 402 and Stablecoin
CUDA Libraries I Will Explore
There are an estimated 900 CUDA libraries from NVIDIA. I will select the ones relevant to my needs.

Library | Purpose | Robotics Use
cuBLAS | GPU-accelerated linear algebra (matrix mult, LU, QR) | Rigid-body dynamics, transforms
cuSOLVER | Linear systems & eigendecomposition | Inverse kinematics, least squares
cuSPARSE / cuDSS | Sparse matrices & solvers | Large Jacobian systems, graph optimization
cuRAND | Random number generation | Monte Carlo simulations, sensor noise
cuFFT | Fast … Continue reading CUDA Libraries I Will Explore
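To make the first row concrete, here is a minimal sketch of using cuBLAS SGEMM to apply a 3x3 rigid-body rotation to a batch of points; the identity rotation, point count, and variable names are illustrative assumptions, not taken from the post.

#include <cstdio>
#include <cublas_v2.h>
#include <cuda_runtime.h>

int main() {
    // 3x3 rotation matrix R (identity here) and 3xN points P, column-major as cuBLAS expects
    const int N = 4;
    float h_R[9]     = {1,0,0, 0,1,0, 0,0,1};
    float h_P[3 * N] = {1,2,3, 4,5,6, 7,8,9, 10,11,12};
    float h_out[3 * N];

    float *d_R, *d_P, *d_out;
    cudaMalloc(&d_R, sizeof(h_R));
    cudaMalloc(&d_P, sizeof(h_P));
    cudaMalloc(&d_out, sizeof(h_out));
    cudaMemcpy(d_R, h_R, sizeof(h_R), cudaMemcpyHostToDevice);
    cudaMemcpy(d_P, h_P, sizeof(h_P), cudaMemcpyHostToDevice);

    // out = 1.0 * R * P + 0.0 * out  (a 3x3 matrix times a 3xN batch of points)
    cublasHandle_t handle;
    cublasCreate(&handle);
    const float alpha = 1.0f, beta = 0.0f;
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N, 3, N, 3,
                &alpha, d_R, 3, d_P, 3, &beta, d_out, 3);

    cudaMemcpy(h_out, d_out, sizeof(h_out), cudaMemcpyDeviceToHost);
    printf("first transformed point: (%f, %f, %f)\n", h_out[0], h_out[1], h_out[2]);

    cublasDestroy(handle);
    cudaFree(d_R); cudaFree(d_P); cudaFree(d_out);
    return 0;
}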
NVIDIA Omniverse and AI Factory
NVIDIA Omniverse is a GPU-powered platform for building, simulating, and collaborating inside realistic, physics-based 3D worlds — used for robotics, digital twins, and AI-driven system design. NVIDIA Omniverse is built directly on top of CUDA and related NVIDIA SDKs. It’s essentially a “metaverse operating system” powered by accelerated computing. It's realized by foundational CUDA packages: … Continue reading NVIDIA Omniverse and AI Factory
NV-Tesseract Time Series Models
NV-Tesseract is a family of deep-learning models designed specifically for time-series data (i.e., sequences of values over time) rather than just static tabular or image data. The presentation below is by Weiji Chen, AI/ML Engineer, NVIDIA. Evolving to Tesseract 2.0: Tesseract 2.0 then integrates the rest of NVIDIA's technologies, such as RAPIDS (a Python/C++ library suite leveraging CUDA-X, cuDF, CuPy, RMM … Continue reading NV-Tesseract Time Series Models
cuFFT Library Guide
The Fast Fourier Transform (FFT) is a useful tool in many areas, and its computation can be demanding. GPUs help manage this heavy processing, making a big difference for those working on these tasks daily.

#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <cufft.h>
#include <cuda_runtime.h>

// Error checking macro for CUDA calls
… Continue reading cuFFT Library Guide
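The excerpt cuts off at the error-checking macro. A common form of such a macro, shown here as a standard idiom rather than the post's exact code:

#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

// Wrap every CUDA runtime call; abort with file/line info if it fails
#define CUDA_CHECK(call)                                                  \
    do {                                                                  \
        cudaError_t err = (call);                                         \
        if (err != cudaSuccess) {                                         \
            fprintf(stderr, "CUDA error %s at %s:%d\n",                   \
                    cudaGetErrorString(err), __FILE__, __LINE__);         \
            exit(EXIT_FAILURE);                                           \
        }                                                                 \
    } while (0)

int main() {
    float* d_buf = NULL;
    CUDA_CHECK(cudaMalloc(&d_buf, 1024 * sizeof(float)));  // fails loudly if allocation fails
    CUDA_CHECK(cudaFree(d_buf));
    return 0;
}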
Quant shops Utilize Python to Take Advantage of GPU Computing Benefits
Here's a more complete GPU-accelerated signal-generation pipeline for quantitative finance using Python with CuPy and Numba.cuda. It includes:

Data simulation (price series)
GPU-based moving averages
A nonlinear filter kernel (Numba.cuda)
Signal generation logic (crossover + filter)
Batch processing of multiple assets in parallel

import cupy as cp
import numpy as np
from numba import cuda
… Continue reading Quant shops Utilize Python to Take Advantage of GPU Computing Benefits
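The post builds its kernel with Numba.cuda in Python; purely for comparison, here is a sketch of what a simple nonlinear filter kernel of that kind might look like in raw CUDA C. The tanh squash, threshold, and names are my own illustrative assumptions, not the post's logic.

#include <cuda_runtime.h>
#include <math.h>
#include <stdio.h>

// Illustrative nonlinear filter: squash a raw crossover signal with tanh
// and zero out weak values below a threshold (both choices are assumptions for this sketch)
__global__ void nonlinear_filter(const float* raw_signal, float* filtered, int n, float threshold) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float squashed = tanhf(raw_signal[i]);                              // bound the signal to (-1, 1)
        filtered[i] = (fabsf(squashed) > threshold) ? squashed : 0.0f;      // drop weak signals
    }
}

int main() {
    const int n = 1024;
    float h_raw[n], h_out[n];
    for (int i = 0; i < n; ++i) h_raw[i] = sinf(0.01f * i);  // stand-in for a crossover signal

    float *d_raw, *d_out;
    cudaMalloc(&d_raw, n * sizeof(float));
    cudaMalloc(&d_out, n * sizeof(float));
    cudaMemcpy(d_raw, h_raw, n * sizeof(float), cudaMemcpyHostToDevice);

    nonlinear_filter<<<(n + 255) / 256, 256>>>(d_raw, d_out, n, 0.25f);

    cudaMemcpy(h_out, d_out, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("filtered[100] = %f\n", h_out[100]);

    cudaFree(d_raw); cudaFree(d_out);
    return 0;
}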
Claude Code CLI versus Using the Same LLM in IDEs like Cursor or Windsurf (Codex vs GPT-Codex 5)
I noticed something quite interesting lately — using Claude Sonnet 4.5 inside Claude Code CLI feels different from using the exact same model inside Cursor, Windsurf, or other IDEs. And honestly, Claude Code CLI outperforms the rest by a large margin. Even though both environments call the same LLM — Claude Sonnet 4.5 — the experience … Continue reading Claude Code CLI versus Using the Same LLM in IDEs like Cursor or Windsurf (Codex vs GPT-Codex 5)
New Themes and Ideas from NVIDIA GTC 2025
The big takeaway from Jensen Huang's GTC keynote: we are witnessing another paradigm shift — an Apollo moment for technology. AI is no longer a tool that aids work; it has become the work itself, solving problems directly rather than enabling humans to do so. With the end of Moore’s Law, traditional CPU scaling has plateaued. The answer … Continue reading New Themes and Ideas from NVIDIA GTC 2025
Three Years After ChatGPT: Why Most Firms Still Struggle to Build Their Own AI Models
It has been three years since ChatGPT first reshaped the AI landscape, yet surprisingly few organizations have managed to develop their own successful large language models (LLMs) trained on proprietary data. When I first thought about why this was happening, I suspected the problem lay in tokenization — the seemingly simple yet intricate process of … Continue reading Three Years After ChatGPT: Why Most Firms Still Struggle to Build Their Own AI Models