Perplexity Lab: Smart Research

In the fast-expanding world of AI search and knowledge tools, Perplexity AI has emerged as a rising star—and its new feature, Perplexity Lab, adds serious brainpower to the mix. If you’ve been frustrated with vague answers, shallow summaries, or endless Google rabbit holes, this new tool might just change your workflow forever.

What Is Perplexity Lab?

Perplexity Lab is a recently released feature from the AI search startup Perplexity.ai. It builds on the platform’s already impressive real-time search capabilities by letting users customize, control, and expand how the AI gathers and refines information.

Think of it as the difference between having a really smart assistant—and building that assistant’s brain yourself. Perplexity Lab allows you to:

  • Create custom research agents that stay focused on your topic
  • Use live web data, including citations, to ensure up-to-date answers
  • Ask follow-up questions that build intelligently on prior responses
  • Organize your queries and discoveries in shareable workspaces

It’s research, but way more powerful—and way less scatterbrained.

What Makes Perplexity Lab Stand Out?

Perplexity has always stood out for its elegant combo of search engine + language model. You get concise answers, clear sources, and a sane UI. But with Lab, it levels up into a power tool. Here’s where it shines:

1. Deep-Dive Research with Memory

Unlike the usual “ask and forget” model of chat-based AI, Perplexity Lab lets you build a persistent line of inquiry. It remembers your previous questions, citations, and insights—and lets you string them together like beads on a necklace of knowledge.

If you’re writing a paper, prepping for a podcast, or designing a course, this continuity is golden.

2. Transparent Sourcing

Each answer in Perplexity Lab comes with clickable citations, so you can fact-check and explore the original sources. No more mystery meat AI answers—you see the ingredients.

This is particularly useful for:

  • Journalists checking facts in real time
  • Researchers building citation trails
  • Curious minds who just don’t trust black boxes

3. Custom “Agents” with Instructions

You can now create your own AI agents tailored to specific tasks. Want one to specialize in crypto regulations, another for gourmet cooking, and a third for analyzing academic papers? Go for it.

Each Lab agent can have custom instructions, memory, and topic boundaries. It’s a bit like cloning your brain and training it to be a domain expert—without the coffee addiction.

4. Collaboration and Shareability

Perplexity Lab encourages you to save and share your research paths. If you’re working with a team or teaching others, this is a huge bonus. No more emailing 20 links and scribbled notes—you can just hand them a living, navigable thread of inquiry.

What It’s Especially Good At

While it’s versatile, here’s what Perplexity Lab really excels at:

  • Up-to-the-minute research using real-time web data
  • Comparative analysis across multiple sources
  • Synthesizing complex topics (e.g., AI policy, scientific debates)
  • Building learning paths on new subjects
  • Prepping briefs or outlines for writing projects

And because it draws from a curated mix of sources—including academic papers, news articles, and even Reddit—it’s not locked into the usual walled gardens.

A Personal Take

As someone who writes, researches, and occasionally disappears into Wikipedia wormholes, I’ve found Perplexity Lab to be a welcome upgrade. It feels more like a thinking partner than just a search engine. And in a world drowning in content, tools that help you think clearly are priceless.

I found it particularly effective for technical market analysis. It can quickly research, identify, and analyze market movements, then make projections based on whatever investment model you favor.

It’s not perfect yet—it could benefit from better export options, and the UI sometimes hides its own power features—but the trajectory is impressive.

If ChatGPT is your workshop, Perplexity Lab might just be your library and research team rolled into one.

Bittensor: Crypto Brain

Imagine if Bitcoin and ChatGPT had a lovechild raised by a swarm of AI researchers. That’s Bittensor. It’s not just another blockchain project or a crypto token with a cute mascot. It’s a radically different approach to AI development—open, decentralized, and incentivized.

What Is Bittensor?

Bittensor is a decentralized network designed to train and reward artificial intelligence models using blockchain economics. Instead of AI being locked inside giant corporate vaults (looking at you, OpenAI and Google), Bittensor spreads the task across a peer-to-peer network where contributors are compensated in TAO, the native token.

It’s kind of like if SETI@home, Bitcoin mining, and Hugging Face all moved into the same hacker house. Everyone contributes compute power or AI models—and gets paid if their work is valuable.

How It Works (Without Melting Your Brain)

The Bittensor network is structured around “subnets,” each designed for a specific kind of AI task—language models, image generation, or other cognitive workloads. Developers build and run machine learning models on these subnets.

Here’s the clever bit: these models are evaluated by the network based on how useful they are. If your AI model gives smart answers or creative outputs, the network rewards you with TAO. Think of it like merit-based mining, where the best minds (and models) win.
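The pro-rata idea above can be sketched in a few lines of Python. This is a toy illustration of the incentive mechanism described here, not Bittensor's actual code; the miner names and scores are made up.

```python
# Toy illustration (not Bittensor's real implementation): validators score
# each miner's model, and a block's TAO emission is split pro rata to scores.
def split_emission(scores, emission_tao):
    """Divide a reward pool among miners in proportion to their scores."""
    total = sum(scores.values())
    if total == 0:
        return {miner: 0.0 for miner in scores}
    return {miner: emission_tao * s / total for miner, s in scores.items()}

# Three hypothetical miners on a language-model subnet:
rewards = split_emission({"miner_a": 0.9, "miner_b": 0.6, "miner_c": 0.0}, 1.0)
print(rewards)  # the highest-scoring miner earns the largest share
```

The real network layers staking, validator consensus, and per-subnet rules on top of this, but the core principle is the same: useful output earns a bigger slice of the emission.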

What You Can Do With It

If you’re technically inclined, you can:

  • Run a miner (provide compute and model outputs for the network to evaluate).
  • Train or deploy your own model and join a subnet.
  • Earn TAO by making useful AI contributions to the network.

And if you’re less technical, you can still:

  • Buy and hold TAO, betting on the future of open-source AI.
  • Support the decentralization of AI infrastructure (no PhD required).

Why It Matters

Bittensor offers a counter-narrative to the current AI gold rush. Instead of locking innovation inside the skyscrapers of Silicon Valley, it invites the world to participate—and to be compensated fairly for it. It aligns economic incentives with AI development in a way that’s permissionless, open-source, and transparent.

It’s also part of a broader trend: the decentralization of intelligence. Just like Bitcoin separated money from the state, Bittensor aims to separate AI from corporate control. That could have profound implications for who owns the future—and who gets to shape it.

Bittensor is still in its early phase. There are kinks to iron out, the tech is non-trivial, and it won’t make you rich overnight. But for those who believe that intelligence—like money—should be free, distributed, and fairly rewarded, it might just be one of the most important projects in crypto today. You can check out their cool website here: Bittensor.com

And, in case you’re wondering, this blog post was not written by a Bittensor subnet. Yet.

The Altman-Ive AI Device

What happens when the design visionary behind the iPhone teams up with the most forward-facing leader in artificial intelligence? You get a project that may very well rewrite how we relate to our devices—and perhaps even reimagine what a “device” is.

OpenAI’s CEO Sam Altman and former Apple chief design officer Jony Ive have quietly been working on a new kind of AI gadget, and while the details are still under wraps, what’s emerging is nothing short of a tectonic shift in how we interact with computing. Their goal? A screenless, intuitive AI companion that could make today’s smartphones feel like relics.

The Birth of a New Category

Unlike smartphones, tablets, or even smartwatches, this new device aims to be something altogether different: a context-aware, voice-first AI assistant that requires no screen and minimal input. It won’t replace your laptop or phone but instead slip into the space between them—a quiet, ever-present companion. Think of it as a kind of AI whisperer, always listening, always ready, but never in your face.

According to leaks and reports from The Verge and AppleInsider, this project is not just a moonshot—it’s already backed by billions. OpenAI acquired Ive’s hardware startup, “io,” in a deal reportedly valued at around $6.5 billion. That’s a serious commitment to a future where ambient, AI-first computing is the norm.

What Might It Look Like?

While we don’t have official images, concept renderings suggest a device about the size of an iPod Shuffle, perhaps worn around the neck or clipped to clothing. Others imagine a disc-shaped object sitting quietly on a desk, always listening, always ready. Some even speculate a pendant-style form factor—minimalist, tactile, and elegant. You won’t be scrolling through it. You’ll be talking to it.

Designer Ben Geskin shared speculative visuals on X (formerly Twitter) that highlight the possibilities: sleek aluminum bodies, subtle LED indicators, a wearable loop. What’s clear is that the device is meant to be as invisible as possible. Its intelligence will come not from what it shows, but from what it understands—about you, your surroundings, and your needs in real time.

Back to the Future of Interaction

Jony Ive and Sam Altman are both well known for questioning the status quo. Ive has often spoken about the “tyranny of the screen” and the addictive behaviors modern smartphones encourage. Altman, meanwhile, has spent years envisioning what it means for artificial intelligence to become not just a tool, but a partner. Their collaboration is about creating a new kind of interface—one where the user is liberated from the glass rectangle.

It’s also a rebuttal to the current trend of “bigger and better” displays. In their eyes, the next wave of progress means fewer buttons, fewer distractions, and more ambient intelligence. AI that works in the background, not on your retina.

The Shrinking of the Interface

This device is part of a much larger trend: the miniaturization of intelligence. As chips get smaller, sensors more sophisticated, and AI models more efficient, the idea of the “device” begins to blur. Today, we carry smartphones. Tomorrow, we might wear pendants. And soon after, we may simply embed the intelligence into ourselves—glasses, earbuds, even neural implants.

We’re moving down a trajectory where AI assistants might ultimately vanish from view altogether. Today’s iPhone was yesterday’s iMac, and tomorrow’s AI interface may be no more visible than a hearing aid. In that light, the Altman-Ive device feels like a bridge—a necessary stepping stone between screen-based computing and truly ambient intelligence.

What Will It Do?

The core functionality of the device seems centered around contextual awareness. It will use microphones (and possibly cameras) to take in ambient information—your tone of voice, your surroundings, your habits—and offer intelligent assistance without needing prompts. Imagine walking into your kitchen and saying, “What should I cook for dinner with what I’ve got in the fridge?” Or getting a quiet reminder as you leave the house: “Don’t forget your umbrella, rain is due in 20 minutes.”

Unlike a phone or smart speaker, this device won’t wait for you to summon it—it will proactively assist, like a digital valet. It also won’t assume you want a screen-based answer. It may whisper suggestions through a bone-conduction speaker or tap into nearby screens when visuals are required.

Privacy and Ethics in a World of Always-On AI

Of course, a device that is always listening raises privacy concerns. Both Altman and Ive have spoken publicly about the need for strong ethical frameworks in AI and design. The challenge here will be enormous: How do you create a device that listens without intruding? That watches without storing? That helps without surveilling?

It’s likely this product will come with strict privacy protocols, local processing where possible, and clear user controls. But given OpenAI’s increasingly vast training data needs and the device’s potential as an always-on microphone, the public’s trust will be both hard-earned and crucial.

The Broader Impact

If successful, the Altman-Ive device could do for AI what the iPhone did for mobile computing: create a new category. Competitors like Humane and Rabbit are already in this space, but none carry the same design pedigree or AI muscle. We could be witnessing the dawn of a new kind of interface war—not over screen size or megapixels, but over presence, subtlety, and contextual intelligence.

Is This the Next Big Thing?

Possibly. This collaboration taps into something deeper than tech trends—it taps into our desire for simplicity, for elegance, for technology that disappears rather than dominates. And it nudges us toward a future that’s long been whispered about in science fiction: AI not as a thing we use, but a presence we live with.

From the desktop to the pocket, and now perhaps to the pendant—or even the bloodstream—intelligence is becoming smaller, quieter, and more intimate. The Altman-Ive AI device isn’t just about inventing a gadget. It’s about reimagining our relationship with technology entirely.

And if history is any guide, when Jony Ive designs something new and Sam Altman trains its brain… we’d be wise to pay attention. I want one!

Leonardo for Multiple Models

Leonardo.ai has rapidly become a go-to platform for creatives exploring AI-generated imagery. Its strength lies in offering a diverse suite of models tailored to various artistic needs, all in one place. Particularly interesting to me are the recent additions of Black Forest Labs’ Flux.1 Kontext model and the GPT-Image-1 model. These two alone greatly enhance Leonardo’s capabilities, providing users with advanced tools for image generation.

I’m a bit of an AI image generation junkie, so I burned through my free credits in 10 minutes. But Leonardo.ai offers a reasonable paid plan at $12 a month that provides enough credits to get your feet wet.

Leonardo.ai’s Model Lineup

Leonardo.ai offers a range of models, each designed for specific styles and outputs:

  • Leonardo Lightning XL: A high-speed generalist model suitable for various styles, from photorealism to painterly effects.
  • Leonardo Anime XL: Tailored for anime and illustrative styles, delivering high-speed, high-quality outputs.
  • Leonardo Kino XL: Focuses on cinematic outputs, excelling at wider aspect ratios without requiring negative prompts.
  • Leonardo Vision XL: Versatile in producing realistic and photographic images, especially effective with detailed prompts.
  • Leonardo Diffusion XL: An evolution of the core Leonardo model, capable of generating stunning images even with concise prompts.
  • AlbedoBase XL: Leans towards CG artistic outputs, offering a generalist approach with a unique flair.

Introducing Flux.1 Kontext and GPT-Image-1

Leonardo.ai’s recent integration of Flux.1 Kontext and GPT-Image-1 models marks a significant advancement in AI image generation:

Flux.1 by Black Forest Labs

Flux.1 is renowned for its photorealistic outputs and prompt adherence. Developed by Black Forest Labs, it offers multiple variants:

  • Flux.1 Schnell: An open-source model under the Apache License, providing fast and efficient image generation.
  • Flux.1 Dev: Available under a non-commercial license, suitable for development and testing purposes.
  • Flux.1 Pro: A proprietary model offering high-resolution outputs and advanced features like Ultra and Raw modes, delivering images up to 4 megapixels with enhanced realism.

Flux.1’s integration into Leonardo.ai allows users to generate images with exceptional detail and accuracy, making it a valuable tool for professionals and hobbyists alike.

GPT-Image-1 by OpenAI

GPT-Image-1 introduces a novel approach by enabling multi-image referencing. Users can input up to five images, and the model intelligently combines elements from each based on textual instructions. This capability is particularly useful for creating composite images or blending styles and themes seamlessly.

Both Flux.1 and GPT-Image-1 are accessible through Leonardo.ai’s Omni Editor, offering users flexibility in creation and editing processes.

Why Choose Leonardo.ai?

Leonardo.ai stands out for its comprehensive suite of models catering to diverse creative needs. Whether you’re aiming for photorealism, anime-style illustrations, or cinematic visuals, Leonardo.ai provides the tools to bring your vision to life. The addition of Flux.1 and GPT-Image-1 further enhances its versatility, making it a robust platform for AI-driven image generation.

It’s nice to have a one-stop-shop for image generation. The only thing I’m worried about is burning through my credits in one day!

Popular AI Chatbots Compared

If you’ve ever felt like you’re speed-dating AIs just to find the one that gets your weird mix of questions, creativity, and curiosity—you’re not alone. The new generation of AI assistants in 2025 feels a bit like assembling your dream band: each has its own strengths, quirks, and genre.

In this post, I’ll pit four of the biggest players against each other—OpenAI’s ChatGPT-4o, Anthropic’s Claude Sonnet 4, Google’s Gemini 2.5 Flash, and Perplexity’s Standard model—to see how they stack up across key areas. Keep in mind, these are the free versions of each. If you go for the paid plans (usually around $20 a month), you get even more capabilities.

Intelligence and Understanding

ChatGPT-4o (OpenAI)

OpenAI’s “omnimodel” (that’s what the “o” stands for) is the most balanced conversationalist of the bunch. It’s fast, articulate, and surprisingly good at emotional tone—helping you write blog posts, code, or even untangle your thoughts. It handles math, logic, and creative writing smoothly. And with GPT-4o, it can “see” and “hear” with multimodal abilities, making it a bit of a polymath.

Think of it as your smartest friend who’s also great at explaining things and never tired of brainstorming.

Claude Sonnet 4 (Anthropic)

Claude has a more contemplative, almost philosophical vibe. It excels at reading and analyzing long documents—like if you dropped a dense white paper on AI ethics or a 200-page novel into the chat, Claude wouldn’t blink. It’s also the most cautious of the bunch—polite, filtered, and often offers “consider all sides” responses.

Picture Claude as the liberal arts professor who brings nuance and humanity into every answer.

Gemini 2.5 Flash (Google DeepMind)

Gemini Flash is the speed demon. Designed for snappy, fast interactions, it often responds faster than the others and does so with decent accuracy. However, it can lack the depth or warmth of ChatGPT or Claude in creative or emotionally nuanced tasks. That said, it plays extremely well with other Google tools (Docs, Sheets, Gmail).

Think of Gemini Flash as your super-efficient assistant—less poetic, more productivity.

Perplexity Standard

This one’s a bit different. Perplexity’s model is all about real-time search. Instead of generating answers from a fixed knowledge base, it fetches current information from the internet and cites it directly. It’s like having a search engine with a conversational front end. Great for up-to-the-minute answers.

Perplexity is the librarian who sprints across the stacks and brings you five books and a few recent newspaper articles—fast.

Speed and Responsiveness

  • Gemini Flash is fastest, no contest.
  • ChatGPT-4o is now extremely quick, even when juggling images or data.
  • Claude Sonnet 4 is fast, but thoughtful—it sometimes pauses as if it’s genuinely mulling over your question.
  • Perplexity is variable: fast for basic questions, but might take a few seconds for deeper searches.

Creativity and Writing

  • ChatGPT-4o is the most versatile: it can write poetry, comedy, scripts, and user-friendly code.
  • Claude brings literary depth. If you want something elegant or soulful, it might even outperform ChatGPT.
  • Gemini Flash is fine for outlines and quick drafts but lacks flair.
  • Perplexity isn’t built for creativity—it’s more like an AI researcher than a storyteller.

Real-Time Knowledge

This is where Perplexity shines. It actively searches the web in real time and shows sources. If you’re looking for the latest news, product comparisons, or niche data, it’s the one to use.

  • ChatGPT-4o has web browsing in the Pro plan (but slower and more structured).
  • Claude Sonnet 4 does not browse (as of now).
  • Gemini Flash is connected to Google Search but isn’t as transparent with citations as Perplexity.

Use Case Match-Ups

  • Writing a blog post: ChatGPT-4o or Claude Sonnet 4
  • Answering real-time news: Perplexity Standard
  • Brainstorming a project: ChatGPT-4o
  • Deep document analysis: Claude Sonnet 4
  • Quick answers + integration: Gemini 2.5 Flash
  • Coding help: ChatGPT-4o
  • Search with citations: Perplexity Standard
  • Creating visual or voice content: ChatGPT-4o

My Personal Taste

There’s no single winner here—it’s more like a toolkit:

  • ChatGPT-4o: Best all-around. If you want one AI to rule them all, this is it.
  • Claude Sonnet 4: Best for nuanced thought, long texts, and emotionally intelligent writing.
  • Gemini 2.5 Flash: Best if you’re deep in the Googleverse and want speed above all.
  • Perplexity Standard: Best for up-to-date facts, live research, and citations.

Personally? I keep ChatGPT-4o as my default, but reach for Perplexity when I need to check the latest headlines or product specs, and use Claude when I want a second opinion that sounds like it came from an AI who’s read more Russian literature than I have.

In the end, the best AI isn’t the smartest—it’s the one that matches how you think, create, and explore.

AI Garden Planning

AI Garden Planning

That awkward strip of land on the side of your house? It’s got serious potential. Whether it’s sun-drenched or shady, wild or weedy, you can turn it into a thriving little oasis—with some help from AI.

Using AI to plan and visualize your garden isn’t just for tech geeks. It’s for anyone who wants to skip the overwhelm and start digging with confidence. Let’s walk through how to design a side garden the smart way—guided by your creativity and assisted by AI.

Why Use AI for Garden Planning?

Good gardening is part art, part science. AI helps fill in the science-y bits so you can focus on the fun stuff—like choosing your color palette, imagining how it will smell in summer, or figuring out how to cram one more tomato plant into a too-small bed.

With just a few prompts, AI can help you:

  • Pick plants that match your space and climate
  • Design a layout that maximizes sun, airflow, and beauty
  • Generate images to visualize the finished garden
  • Create a planting and maintenance calendar
  • Troubleshoot problems as your garden grows

Step 1: Describe Your Garden Space to AI

Start with ChatGPT or your preferred AI assistant. Give it a simple description of your space, like:

Help me plan a small side garden that’s about 3 feet wide and 12 feet long. It gets 5–6 hours of sun in the afternoon. I live in San Luis Obispo, and I’d like mostly low-maintenance plants that attract bees and butterflies.

Within seconds, you’ll have a suggested list of plants, ideas for layout, and maybe even soil tips. You can refine your request as much as you want: add a color theme, focus on edibles, or ask for deer-resistant options. AI won’t get tired of your follow-up questions.
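If you’d rather script this step than type into a chat window, here’s a minimal Python sketch using OpenAI’s official SDK. The “gpt-4o” model name and the helper function names are assumptions for illustration; the exact same prompt works pasted straight into ChatGPT.

```python
# Sketch: scripting the Step 1 prompt with OpenAI's Python SDK.
# Helper names and the "gpt-4o" model name are illustrative assumptions.

def build_garden_prompt(width_ft, length_ft, sun_hours, city, goals):
    """Assemble a garden-planning prompt from a few site details."""
    return (
        f"Help me plan a small side garden that's about {width_ft} feet wide "
        f"and {length_ft} feet long. It gets {sun_hours} hours of sun in the "
        f"afternoon. I live in {city}, and I'd like {goals}."
    )

def ask_for_plan(prompt):
    """Send the prompt to ChatGPT and return the reply text.

    Requires the `openai` package and an OPENAI_API_KEY in your environment.
    """
    from openai import OpenAI
    client = OpenAI()
    reply = client.chat.completions.create(
        model="gpt-4o",  # assumed model name; swap in what your account offers
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content

prompt = build_garden_prompt(
    3, 12, "5-6", "San Luis Obispo",
    "mostly low-maintenance plants that attract bees and butterflies",
)
# plan = ask_for_plan(prompt)  # uncomment once your API key is configured
```

Because the prompt is built from variables, you can rerun it for a different bed size or city without retyping the whole request.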

Step 2: Generate a Visual of Your Future Garden

Once you have a general idea of what you want to grow, you can create an image of your imagined garden using AI tools — and yes, ChatGPT is one of them.

If you’re using ChatGPT with image generation (like the Plus plan with DALL·E built in), you can simply type something like:

Create an image of a narrow side yard garden with raised beds, blooming lavender and salvia, and a stepping stone path. There’s a wooden fence on one side and sunlight coming in from the left. Use a 3:2 aspect ratio.

ChatGPT will generate a custom image for you right in the chat. You can tweak the prompt until it matches your vision—add vertical elements, more color, or change the season.

If you don’t have image generation in ChatGPT, you can copy your prompt into tools like DALL·E (via Bing), or Midjourney. The goal is the same: bring your garden idea to life before you touch a trowel.

This is like having a virtual sketchpad for your green dreams—instantly adjustable and surprisingly inspiring.
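The same idea works programmatically. Below is a minimal sketch against OpenAI’s Images API, assuming the `openai` package and the “dall-e-3” model; DALL·E 3 only offers fixed sizes, so the landscape 1792x1024 option stands in for the 3:2 ratio mentioned above.

```python
# Sketch: generating the Step 2 concept image through OpenAI's Images API.
# The "dall-e-3" model name and size choice are assumptions; DALL-E 3 has
# fixed sizes, so 1792x1024 is the closest landscape stand-in for 3:2.

image_prompt = (
    "A narrow side yard garden with raised beds, blooming lavender and salvia, "
    "and a stepping stone path. A wooden fence on one side, sunlight coming in "
    "from the left."
)

def render_concept(prompt):
    """Request one concept image and return a URL to the rendering.

    Requires the `openai` package and an OPENAI_API_KEY in your environment.
    """
    from openai import OpenAI
    client = OpenAI()
    result = client.images.generate(
        model="dall-e-3",  # assumed model name
        prompt=prompt,
        size="1792x1024",
        n=1,
    )
    return result.data[0].url

# url = render_concept(image_prompt)  # uncomment once your API key is configured
```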

Step 3: Let AI Help with Layout and Spacing

Spacing is one of the trickiest parts of planning a small garden. AI can help you figure out how to avoid crowding while still packing in the plants. Ask it:

How far apart should I plant lavender, yarrow, and oregano if I want a natural, cottage-style look?

It can even suggest companion plants or warn you about species that don’t play well together. You can also ask it to generate a simple grid-style layout to print or sketch onto your site plan.

Step 4: Build a Planting Calendar

Want to know when to plant your seeds or transplant your starts? AI can help create a customized calendar based on your location and plant list. Try:

Give me a monthly planting and maintenance schedule for a pollinator-friendly side garden in USDA Zone 9b.

You’ll get a timeline for planting, pruning, fertilizing, and even harvesting—without digging through a dozen different websites.

Step 5: Add a Bit of Tech to Your Soil

If you enjoy the techy side of things, consider adding a few smart tools to your garden setup:

  • Soil moisture sensors that sync with your phone
  • Smart irrigation timers with weather-based scheduling
  • AR plant ID apps to identify mystery weeds or track blooms

It’s a low-effort way to stay connected to your garden, even when life gets busy.

AI Empowers Gardeners

Planning a garden used to mean flipping through books, sketching diagrams, and hoping your choices would work. With AI, you can test ideas, refine plans, and even dream up alternate designs before committing a single seed to the soil.

And once it’s planted? You’ll still be the one watering, weeding, and pausing to watch the bees. But now you’ll know that every plant earned its spot—and your side yard won’t feel like a leftover space anymore.