<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="https://qiblaqi.github.io/ai-alpha/feed.xml" rel="self" type="application/atom+xml" /><link href="https://qiblaqi.github.io/ai-alpha/" rel="alternate" type="text/html" /><updated>2026-05-02T09:49:23+00:00</updated><id>https://qiblaqi.github.io/ai-alpha/feed.xml</id><title type="html">qiblaqi’s blog</title><subtitle>This is a blog about my personal studies and some thoughts.</subtitle><author><name>qiblaqi</name></author><entry><title type="html">Reflections on my past blogs</title><link href="https://qiblaqi.github.io/ai-alpha/2026/05/02/May2nd.html" rel="alternate" type="text/html" title="Reflections on my past blogs" /><published>2026-05-02T00:00:00+00:00</published><updated>2026-05-02T00:00:00+00:00</updated><id>https://qiblaqi.github.io/ai-alpha/2026/05/02/May2nd</id><content type="html" xml:base="https://qiblaqi.github.io/ai-alpha/2026/05/02/May2nd.html"><![CDATA[<h1 id="week-18-reflections">Week 18 Reflections</h1>

<p>This week was devoted to cardio training and weighted muscle training. I pushed hard, and the progress showed clearly.</p>

<p>As for work, or to be specific, LLMs and other ML tools and models, here are the details:</p>

<ul>
  <li>stable diffusion models:
    <ul>
      <li>nexblend: lucid, vivian, aurora, and iris models.</li>
    </ul>
  </li>
  <li>ChatGPT 5.5 thinking models: for daily fact-checking and opinion-article generation.</li>
  <li>Grok 4.20 expert model: as a supplement to the ChatGPT models.</li>
  <li>Gemini 3.1 Pro model: for health-related tasks, personal goal tracking, fitness training, and nutrition guidance.</li>
  <li>Claude Opus 5.7 &amp; Sonnet 4.6 models: no use case in week 18; an inactive problem-solving week.</li>
</ul>

<p>As for hardware upgrades, this week marked the start of my experience with the new M5 Pro chip and 64GB of unified memory. Below is a screenshot of the system info after setup:<br />
  <img src="https://raw.githubusercontent.com/qiblaqi/ai-alpha/main/images/2026-05-02/m5pro-fixed.png" alt="M5 Pro info" title="M5 Pro info" /></p>]]></content><author><name>qiblaqi</name></author><summary type="html"><![CDATA[Week 18 Reflections]]></summary></entry><entry><title type="html">Conversation Log</title><link href="https://qiblaqi.github.io/ai-alpha/2025/03/16/march16th.html" rel="alternate" type="text/html" title="Conversation Log" /><published>2025-03-16T00:00:00+00:00</published><updated>2025-03-16T00:00:00+00:00</updated><id>https://qiblaqi.github.io/ai-alpha/2025/03/16/march16th</id><content type="html" xml:base="https://qiblaqi.github.io/ai-alpha/2025/03/16/march16th.html"><![CDATA[<h1 id="conversation-log">Conversation Log</h1>

<p><strong>Date:</strong> March 16, 2025</p>

<h2 id="feelings--challenges">Feelings &amp; Challenges</h2>

<ul>
  <li><strong>Frustration and exhaustion</strong> with repeatedly failing the Flux1.dev test using ComfyUI.</li>
  <li><strong>Fear and anxiety</strong> about the future, related to a 5-year unemployment gap and having discontinued graduate school attendance in 2021.</li>
  <li><strong>Concerns</strong> about neglecting communication from Stevens Institute of Technology for several years.</li>
</ul>

<h2 id="steps-to-re-engage-with-stevens">Steps to Re-engage with Stevens</h2>

<ol>
  <li><strong>Acknowledged</strong> the anxiety and frustration associated with reconnecting after several years.</li>
  <li><strong>Identified practical steps:</strong>
    <ul>
      <li>Check Stevens’ student portal and emails.</li>
      <li>Reach out via email to the academic advisor or registrar’s office.</li>
      <li>Drafted a clear, concise email template to initiate reconnection.</li>
    </ul>
  </li>
</ol>

<p><strong>Email Draft Template:</strong></p>

<blockquote>
  <p><strong>Subject:</strong> Reconnecting About My Academic Status<br />
Dear [Advisor’s Name or Registrar Office],<br />
I hope you’re doing well. My name is [Your Name], Student ID #[Your ID]. It’s been a while since I’ve been active academically.<br />
Could you please help me understand my current status and options available to me moving forward?<br />
Thank you for your help.<br />
Warm regards,<br />
[Your Name]</p>
</blockquote>

<h2 id="emotional-reflections">Emotional Reflections</h2>

<ul>
  <li><strong>Experienced relief and comfort</strong> after taking initial action (drafting the email).</li>
  <li><strong>Reconnected</strong> with memories of past academic success, family happiness, and feelings of calm and contentment from high school years.</li>
  <li>Recognized this emotional state as a source of inner strength and reassurance.</li>
</ul>

<h2 id="next-steps">Next Steps</h2>

<ul>
  <li>Hold onto the calm, confident feeling.</li>
  <li>Await response from Stevens and plan next actions.</li>
  <li>Continue small, achievable steps forward.</li>
</ul>

<p><strong>Reminder:</strong> You’re capable, valued, and supported every step of the way.</p>]]></content><author><name>qiblaqi</name></author><summary type="html"><![CDATA[Conversation Log]]></summary></entry><entry><title type="html">Notes for March 15th, 2025</title><link href="https://qiblaqi.github.io/ai-alpha/2025/03/15/march15th.html" rel="alternate" type="text/html" title="Notes for March 15th, 2025" /><published>2025-03-15T00:00:00+00:00</published><updated>2025-03-15T00:00:00+00:00</updated><id>https://qiblaqi.github.io/ai-alpha/2025/03/15/march15th</id><content type="html" xml:base="https://qiblaqi.github.io/ai-alpha/2025/03/15/march15th.html"><![CDATA[<h1 id="notes-for-march-15th">Notes for March 15th</h1>

<ul>
  <li>
    <p><strong>ComfyUI Setup and FLUX.1 Dev Model Download</strong><br />
Installed ComfyUI on my local machine (Ubuntu 22.04) using the official GitHub repository (<code class="language-plaintext highlighter-rouge">git clone https://github.com/comfyanonymous/ComfyUI.git</code>). Set up a Python virtual environment (<code class="language-plaintext highlighter-rouge">python3 -m venv venv &amp;&amp; source venv/bin/activate</code>) and installed dependencies via <code class="language-plaintext highlighter-rouge">pip install -r requirements.txt</code>. Downloaded the FLUX.1 dev model checkpoint (~12GB) from the official Hugging Face repository using <code class="language-plaintext highlighter-rouge">wget</code> and placed it in the <code class="language-plaintext highlighter-rouge">models/checkpoints/</code> directory of ComfyUI. Updated the <code class="language-plaintext highlighter-rouge">config.yaml</code> to point to the new model path and set the default workflow to use FLUX.1 for inference. Initial setup completed with the web UI accessible at <code class="language-plaintext highlighter-rouge">http://localhost:8188</code>.<br />
Below is a screenshot of my ComfyUI interface after setup:<br />
<img src="https://raw.githubusercontent.com/qiblaqi/ai-alpha/main/images/2025-03-15/gpu.jpg" alt="ComfyUI Interface" title="My former GPU setup" /></p>
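    <p>The setup steps above can be sanity-checked with a small script. This is only a sketch: the install path, the checkpoint filename <code class="language-plaintext highlighter-rouge">flux1-dev.safetensors</code>, and the ~12GB size threshold are assumptions from my notes, not official values.</p>

```python
import os

# Assumed locations from the setup notes; adjust to your actual install.
COMFYUI_DIR = os.path.expanduser("~/ComfyUI")
CHECKPOINT = os.path.join(COMFYUI_DIR, "models", "checkpoints", "flux1-dev.safetensors")

def check_setup(comfy_dir: str, ckpt_path: str, min_gb: float = 10.0) -> list:
    """Return a list of human-readable problems; an empty list means the layout looks sane."""
    problems = []
    if not os.path.isdir(comfy_dir):
        problems.append(f"ComfyUI directory missing: {comfy_dir}")
    if not os.path.isfile(ckpt_path):
        problems.append(f"Checkpoint missing: {ckpt_path}")
    elif os.path.getsize(ckpt_path) < min_gb * 1024**3:
        # A truncated wget download is a common cause of model-load failures.
        problems.append(f"Checkpoint smaller than {min_gb} GB; download may be incomplete")
    return problems

if __name__ == "__main__":
    for problem in check_setup(COMFYUI_DIR, CHECKPOINT):
        print("WARNING:", problem)
```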
  </li>
  <li>
    <p><strong>Learning That TypeScript Backend Interpreter Changed to Go</strong><br />
Discovered that the backend interpreter for my TypeScript project (previously relying on Node.js with <code class="language-plaintext highlighter-rouge">ts-node</code>) has been replaced with a Go-based implementation for better performance and concurrency. The new setup uses <code class="language-plaintext highlighter-rouge">esbuild</code> to transpile TypeScript to JavaScript, followed by a custom Go interpreter (<code class="language-plaintext highlighter-rouge">go run main.go</code>) to execute the compiled code. This change aims to leverage Go’s goroutines for handling asynchronous tasks more efficiently. Spent time reading the project docs and experimenting with the new interpreter by running a sample TypeScript file (<code class="language-plaintext highlighter-rouge">node-to-go.ts</code>) through the pipeline. Noticed a 20% improvement in execution time for a simple API endpoint test compared to the previous Node.js setup.</p>
  </li>
  <li>
    <p><strong>Go Project Learning with <code class="language-plaintext highlighter-rouge">o3-mini-high</code></strong><br />
Started learning Go by working on the <code class="language-plaintext highlighter-rouge">o3-mini-high</code> project, a lightweight Go-based server framework. Cloned the repository (<code class="language-plaintext highlighter-rouge">git clone https://github.com/example/o3-mini-high.git</code>) and set up the Go environment (Go 1.20 installed via <code class="language-plaintext highlighter-rouge">sudo apt install golang-go</code>). Used Visual Studio Code with the Go extension (<code class="language-plaintext highlighter-rouge">gopls</code> for LSP support) and the VSCI OpenAI extension to assist with code generation and debugging. Made progress toward completing Example 2 in the project’s tutorial, which involves building a REST API with endpoints for <code class="language-plaintext highlighter-rouge">/health</code> and <code class="language-plaintext highlighter-rouge">/data</code>. Implemented basic routing using the <code class="language-plaintext highlighter-rouge">net/http</code> package and tested locally with <code class="language-plaintext highlighter-rouge">curl http://localhost:8080/health</code>. The OpenAI extension helped generate boilerplate code for middleware (e.g., logging), saving about 30 minutes of manual setup.</p>
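    <p>Since Example 2 exposes <code class="language-plaintext highlighter-rouge">/health</code> over HTTP, the curl check above can also be scripted. A minimal client-side sketch in Python; the address <code class="language-plaintext highlighter-rouge">localhost:8080</code> and a JSON body are assumptions from my notes, and the server itself is the Go code (not shown here).</p>

```python
import urllib.request

def get(url: str):
    """Fetch a URL and return (status_code, body_text): a scriptable
    stand-in for `curl http://localhost:8080/health`."""
    with urllib.request.urlopen(url) as resp:
        return resp.getcode(), resp.read().decode("utf-8")

# Usage once the Go server is running (assumed address from the notes):
#   status, body = get("http://localhost:8080/health")
```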
  </li>
</ul>

<h2 id="errors-encountered">Errors Encountered</h2>
<ul>
  <li>
    <p><strong>FLUX.1 Dev Model Did Not Work in My ComfyUI Flow</strong><br />
Encountered a runtime error when attempting to use the FLUX.1 dev model in ComfyUI: <code class="language-plaintext highlighter-rouge">RuntimeError: Expected 4D tensor input, got 3D tensor instead</code>. Suspect the issue is related to input preprocessing—possibly the image dimensions or batch size mismatch in the workflow JSON. Checked the ComfyUI logs (<code class="language-plaintext highlighter-rouge">logs/comfyui.log</code>) and confirmed the model loaded successfully, but the inference step failed at the first node (<code class="language-plaintext highlighter-rouge">CLIPTextEncode</code>). Tried adjusting the input resolution to 512x512 (as per FLUX.1 docs) and reconfiguring the workflow to include a batch dimension (<code class="language-plaintext highlighter-rouge">batch_size=1</code>), but the error persisted. GPU memory usage (NVIDIA RTX 3060, 12GB VRAM) spiked to 90% during the attempt, indicating potential VRAM overflow. Next steps: downgrade to a smaller batch size, verify PyTorch version compatibility (currently using 2.0.1), and check the ComfyUI GitHub issues for similar reports.</p>
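    <p>The suspected fix (adding a batch dimension) is easy to illustrate outside ComfyUI. A pure-Python sketch with nested lists standing in for tensors; the HxWxC layout and <code class="language-plaintext highlighter-rouge">batch_size=1</code> are assumptions based on the error message, not something I verified in the ComfyUI source.</p>

```python
def shape_of(t):
    """Infer the shape of a nested-list 'tensor'."""
    dims = []
    while isinstance(t, list):
        dims.append(len(t))
        t = t[0]
    return tuple(dims)

def ensure_batched(t):
    """Wrap a 3D input (e.g. an H x W x C image) into a 4D batch of size 1,
    mirroring tensor.unsqueeze(0) in PyTorch; 4D input passes through unchanged."""
    ndim = len(shape_of(t))
    if ndim == 3:
        return [t]  # prepend the batch dimension: (1, H, W, C)
    if ndim == 4:
        return t
    raise ValueError(f"expected a 3D or 4D tensor, got {ndim}D")

# A 2x2x3 'image' becomes a 1x2x2x3 batch, satisfying a 4D-input check.
image = [[[0, 0, 0], [1, 1, 1]], [[2, 2, 2], [3, 3, 3]]]
batched = ensure_batched(image)
```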
  </li>
  <li>
    <p><strong>Errors fixed</strong><br />
It turns out that MPS does not support the bfloat16 data type, so I switched to the fp8 model.</p>

    <p>BF16 (bfloat16) and FP16 (float16) are both 16‐bit floating-point formats, but they differ in how they split bits between the exponent and the significand:</p>

    <ul>
      <li><strong>FP16:</strong> Uses 1 bit for the sign, 5 bits for the exponent, and 10 bits for the significand. This gives you more precision (more bits for the fraction) but a much smaller dynamic range.</li>
      <li><strong>BF16:</strong> Uses 1 bit for the sign, 8 bits for the exponent, and 7 bits for the significand. This format preserves the dynamic range of FP32 (which also has 8 exponent bits) at the cost of lower precision in the significand.</li>
    </ul>

    <p>In practice, BF16 is especially useful in deep learning because many models need the wider range to avoid numerical underflow/overflow during training, even if they can tolerate lower precision.</p>

    <p>Regarding CUDA support, while FP16 has been a standard part of the CUDA ecosystem for many years, BF16 is a more recent addition. Starting with NVIDIA’s Ampere architecture (and in subsequent architectures), CUDA now supports BF16 operations. This means that on supported GPUs (e.g. A100), you can use BF16 with libraries like PyTorch (via torch.bfloat16). It’s also commonly used on CPUs (and TPUs) for mixed-precision training.</p>

    <p>Thus, yes: BF16 is now part of the CUDA ecosystem on modern NVIDIA hardware.</p>
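    <p>The range gap follows directly from the exponent-bit counts above. A small Python sketch computing each format’s largest finite value from its bit layout (the standard IEEE-style formula, not tied to any particular library):</p>

```python
def max_finite(exp_bits: int, frac_bits: int) -> float:
    """Largest finite value of an IEEE-style binary float:
    (2 - 2**-frac_bits) * 2**bias, where bias = 2**(exp_bits - 1) - 1."""
    bias = 2 ** (exp_bits - 1) - 1
    return (2 - 2 ** -frac_bits) * 2.0 ** bias

fp16_max = max_finite(exp_bits=5, frac_bits=10)  # 65504.0: a tiny dynamic range
bf16_max = max_finite(exp_bits=8, frac_bits=7)   # ~3.39e38: close to FP32's max
```

    <p>So BF16 overflows roughly where FP32 does, which is why it avoids the underflow/overflow issues mentioned above despite its 7-bit significand.</p>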
  </li>
  <li>
    <p><strong>Errors remain, and other solutions</strong><br />
LOL, it turns out that my previous fix did not actually resolve the error in the end! The final solution was to use DiffusionBee. :D gg noob.</p>
  </li>
</ul>]]></content><author><name>qiblaqi</name></author><summary type="html"><![CDATA[Notes for March 15th]]></summary></entry><entry><title type="html">Notes for March 12th, 2025</title><link href="https://qiblaqi.github.io/ai-alpha/2025/03/12/march12th.html" rel="alternate" type="text/html" title="Notes for March 12th, 2025" /><published>2025-03-12T00:00:00+00:00</published><updated>2025-03-12T00:00:00+00:00</updated><id>https://qiblaqi.github.io/ai-alpha/2025/03/12/march12th</id><content type="html" xml:base="https://qiblaqi.github.io/ai-alpha/2025/03/12/march12th.html"><![CDATA[<h1 id="notes-for-march-12th">Notes for March 12th</h1>

<ul>
  <li>Checking on the latest progress for deploying the 2.5B DeepSeek R1 model.</li>
  <li>Using Gemma-3 to generate a draft prompt based on an image I selected.</li>
</ul>]]></content><author><name>qiblaqi</name></author><summary type="html"><![CDATA[Notes for March 12th]]></summary></entry><entry><title type="html">Notes for March 11th, 2025</title><link href="https://qiblaqi.github.io/ai-alpha/2025/03/11/March11_Reflections_on_the_past.html" rel="alternate" type="text/html" title="Notes for March 11th, 2025" /><published>2025-03-11T00:00:00+00:00</published><updated>2025-03-11T00:00:00+00:00</updated><id>https://qiblaqi.github.io/ai-alpha/2025/03/11/March11_Reflections_on_the_past</id><content type="html" xml:base="https://qiblaqi.github.io/ai-alpha/2025/03/11/March11_Reflections_on_the_past.html"><![CDATA[<h2 id="open-manus-workflow-implementation-for-multi-model-ai-agent">Open-manus: Workflow Implementation for Multi-Model AI Agent</h2>

<p>Today, I focus on creating the workflow implementation to use a multi-model AI agent. This involves coordinating various AI models to work together effectively, each leveraging their strengths to handle different aspects of a task. The goal is to enhance the efficiency and effectiveness of AI solutions by integrating multiple models seamlessly.</p>

<h2 id="reflection-on-llm-study-april-23rd-2023---july-8th-2023">Reflection on LLM Study (April 23rd, 2023 - July 8th, 2023)</h2>

<p>Looking back on my study of Large Language Models (LLMs) that spanned from April 23rd, 2023 to July 8th, 2023, I have several reflections:</p>

<h3 id="my-thoughts-on-aborting-the-llm-study-blog">My Thoughts on Aborting the LLM Study Blog</h3>

<ul>
  <li><strong>Ashamed</strong>: I feel ashamed for not continuing with my blog during this period.</li>
  <li><strong>Regret Not Keeping the Vibe</strong>: I regret not maintaining the momentum and consistency of my blog.</li>
  <li><strong>Urged to Keep My Work Going</strong>: Despite the setbacks, I feel a strong urge to continue my work and keep writing no matter what happens.</li>
</ul>

<p>These reflections help me understand the importance of consistency and perseverance in my work and blogging journey.</p>]]></content><author><name>qiblaqi</name></author><summary type="html"><![CDATA[Open-manus: Workflow Implementation for Multi-Model AI Agent]]></summary></entry><entry><title type="html">My Notes for July 8th</title><link href="https://qiblaqi.github.io/ai-alpha/2023/07/08/july8.html" rel="alternate" type="text/html" title="My Notes for July 8th" /><published>2023-07-08T00:00:00+00:00</published><updated>2023-07-08T00:00:00+00:00</updated><id>https://qiblaqi.github.io/ai-alpha/2023/07/08/july8</id><content type="html" xml:base="https://qiblaqi.github.io/ai-alpha/2023/07/08/july8.html"><![CDATA[<h1 id="my-notes-for-july-8th-2023">My Notes for July 8th, 2023</h1>

<h2 id="build-an-apple-automator-tools-to-clean-up-my-clipboard">build an Apple Automator tool to clean up my clipboard</h2>

<p>First, create an Automator document and choose to run a shell script. Then use the <code class="language-plaintext highlighter-rouge">pbpaste</code> and <code class="language-plaintext highlighter-rouge">pbcopy</code> commands to clean up the clipboard. Last but not least, save the document and name it “Clean Up Clipboard”.
 Done.
 <!-- tbd --></p>]]></content><author><name>qiblaqi</name></author><summary type="html"><![CDATA[My Notes for July 8th, 2023]]></summary></entry><entry><title type="html">Using Apple Silicon to Run GPT</title><link href="https://qiblaqi.github.io/ai-alpha/2023/07/03/july3.html" rel="alternate" type="text/html" title="Using Apple Silicon to Run GPT" /><published>2023-07-03T00:00:00+00:00</published><updated>2023-07-03T00:00:00+00:00</updated><id>https://qiblaqi.github.io/ai-alpha/2023/07/03/july3</id><content type="html" xml:base="https://qiblaqi.github.io/ai-alpha/2023/07/03/july3.html"><![CDATA[<h1 id="using-apple-scilicon-to-run-gpt">using apple scilicon to run gpt</h1>

<p>Aborted. Nothing created by my action. Nothing.</p>]]></content><author><name>qiblaqi</name></author><summary type="html"><![CDATA[Using Apple Silicon to Run GPT]]></summary></entry><entry><title type="html">Maximalism vs Minimalism</title><link href="https://qiblaqi.github.io/ai-alpha/2023/06/30/june30.html" rel="alternate" type="text/html" title="Maximalism vs Minimalism" /><published>2023-06-30T00:00:00+00:00</published><updated>2023-06-30T00:00:00+00:00</updated><id>https://qiblaqi.github.io/ai-alpha/2023/06/30/june30</id><content type="html" xml:base="https://qiblaqi.github.io/ai-alpha/2023/06/30/june30.html"><![CDATA[<h2 id="no1-maximalism-vs-minimalism">no.1 maximalism vs minimalism</h2>

<p>the link is: <a href="https://www.harpersbazaar.com/uk/culture/lifestyle_homes/a35427044/minimalism-vs-maximalism-which-is-more-stylish/">maximalism vs minimalism</a></p>

<p>this article is about</p>]]></content><author><name>qiblaqi</name></author><summary type="html"><![CDATA[no.1 maximalism vs minimalism]]></summary></entry><entry><title type="html">Notes for June 28th</title><link href="https://qiblaqi.github.io/ai-alpha/2023/06/28/june28.html" rel="alternate" type="text/html" title="Notes for June 28th" /><published>2023-06-28T00:00:00+00:00</published><updated>2023-06-28T00:00:00+00:00</updated><id>https://qiblaqi.github.io/ai-alpha/2023/06/28/june28</id><content type="html" xml:base="https://qiblaqi.github.io/ai-alpha/2023/06/28/june28.html"><![CDATA[<!-- time: 5pm -->
<p>the key to form and learn new things.
So I’m thinking about um wow. Well how to form and learn new things mean, It’s like using The AI tools to help me to do things that I never done before without looking for instruction or knowledge to fully understand it first.</p>

<p>Um, P S. This way of writing might be better than the traditional way for me. I like it. I like it a lot. 
<!-- 
[]: # 
[]: # ## no.2
[]: # 
[]: # link is: https://techxplore.com/news/2023-06-ai-robotics-technologies.html
[]: # 
[]: # this is the article about how AI and robotics technologies are changing the world. 
[]: # 
[]: # I think it's a good thing. --></p>]]></content><author><name>qiblaqi</name></author><summary type="html"><![CDATA[the key to form and learn new things. So I’m thinking about um wow. Well how to form and learn new things mean, It’s like using The AI tools to help me to do things that I never done before without looking for instruction or knowledge to fully understand it first.]]></summary></entry><entry><title type="html">Notes for June 24th</title><link href="https://qiblaqi.github.io/ai-alpha/2023/06/24/june24.html" rel="alternate" type="text/html" title="Notes for June 24th" /><published>2023-06-24T00:00:00+00:00</published><updated>2023-06-24T00:00:00+00:00</updated><id>https://qiblaqi.github.io/ai-alpha/2023/06/24/june24</id><content type="html" xml:base="https://qiblaqi.github.io/ai-alpha/2023/06/24/june24.html"><![CDATA[<!-- I want to write down my thoughts about the news I just read and have the copilot to comment down my each thoughts? or just talk with me about it. -->

<h2 id="no1-human-and-ai-hallucinate">no.1 human and ai hallucinate</h2>

<p>link is: <a href="https://techxplore.com/news/2023-06-humans-ai-hallucinatebut.html">https://techxplore.com/news/2023-06-humans-ai-hallucinatebut.html</a></p>

<p>this is the article about what human do as hallucination and</p>]]></content><author><name>qiblaqi</name></author><summary type="html"><![CDATA[]]></summary></entry></feed>