002: What makes a novel interface?

This is an internal monthly newsletter I write for GenAI at Canva. To share it publicly, I strip out internal research and references to current projects.


My OG DK Pocket

As a child, I devoured visual encyclopedias. I had almost every DK Pocket imaginable, with my favorite being the one on science and space. The cross-sections of planets, diagrams explaining how gravity worked, and artist renditions of unknown worlds fascinated me. The vastness of space and the endless possibilities have always been my biggest inspiration.

I quickly ran out of real-world encyclopedias. Or rather, there weren't enough new space discoveries to print new editions. So I turned to the world of sci-fi, specifically the Star Wars visual encyclopedias with their intricate starship cross-sections.

I always wondered how someone came up with this. How did they not only imagine a future that doesn't exist (and in Star Wars, it's technically the past) but also create logic and reasoning behind it? Each ship, each illustration, has thought and care put into where the fuel line is, how it actually flies, and where a person comfortably sits.

This meticulous attention to detail isn't unique to Star Wars but is seen throughout science fiction. The futures envisioned in these stories, whether utopian or dystopian, are shaped by the realities people live in. Cyberpunk didn't appear out of thin air; it imagined what the future would look like as technology rapidly advanced and societal ideals shifted. A key part of storytelling in science fiction is showing how humans interact with technology, whether through cybernetic implants, voice-activated starships, or even robot companions.

In our daily lives, we often feel bound by the limits of current technology and understanding. We know how difficult something can be to build, and we know how hard it is to change old habits. Often, we stop ourselves before we even start. Yet sci-fi offers a crucial lesson: the future is not fixed, and the possibilities are endless. Our only true limit is ourselves, both in creating new ideas and in understanding them. We shouldn't stop dreaming just because we know it will be hard today.

The next stage of creating novel interfaces, though, is bridging the gap between the imagined and the real. How does the dream become a reality, and where do we adjust? Without a pathway to adoption, even the most novel ideas can falter. Human-Computer Interaction (HCI) research has been tackling this very question for decades: experimenting, prototyping, and understanding how to bring imaginative ideas to life.

In 2017, researchers explored a tactile mobile device screen. In 2024, others examined bridging the gap between robotics and textiles. More recently, John Milinovich has written about generative interfaces becoming the next frontier, offering real-time interaction and feedback for the user.

How do we help users bridge the gap between what they know today and what they'll use tomorrow? Lucy Datyner shared an excellent paper, Dynamics, Multiplicity and Conceptual Blends in HCI, that explores this very topic. The researchers delve into how people come to find digital interfaces intuitive by blending real-world and digital experiences. Think of the skeuomorphic design of the first iPhone, which helped users connect familiar physical objects to new digital controls. This is what we need to explore next: intentionally creating blends between the new and the old to help users learn and adapt.

What could the next novel interface look like? That’s for us to imagine. ✌️


🤖 Across AI

  • Canva has acquired Leonardo.Ai 🎉

  • AI is making its mark on the 2024 Olympics in multiple ways. Athletes are leveraging AI to push themselves to new records by scrutinising their biomechanics, nutrition, and training schedules. Referees are also using it to aid them in split-second decision-making. This technology, already in use in football, demands significant physical infrastructure like chips and cameras.

  • A recent study highlights the importance of vocabulary size in the performance of large language models. It found that models with up to 3 billion parameters achieve better results when equipped with proportionally larger vocabularies.

  • After facing backlash on Design Twitter and accusations of copying Apple's design, Figma has pulled its AI tool, Make Designs. CTO Kris Rasmussen explained that the tool used third-party AI models, not ones developed internally. Initially, users feared Figma was using their private designs without consent, but it now appears the designs were sourced from data scraped off the internet by OpenAI and Amazon.

  • Move over, brat summer: it's AI summer. In a recent essay, Benedict Evans discusses the rapid yet short-lived engagement with ChatGPT. Many users have tried the tool, but few use it regularly, echoing historical patterns in which new technologies take time to integrate into daily life. While large language models (LLMs) might seem like ready-to-use products, they require dedicated research and development to find the right product-market fit and become genuinely useful in practical applications.

  • Can consciousness exist in a computer simulation? Dr. Wanja Wiese explores the conditions for consciousness and two approaches to artificial consciousness. "Replicating" a brain is far beyond today's technology, as computers lack the causal connectivity and energy efficiency of biological brains.

  • NoMindBhutan is breaking new ground as Bhutan's first AI startup. Founded by college students Ugyen Dendup and Jamphel Yigzin Samdrup, the startup faces significant challenges working in a closed economy with limited digital infrastructure. As AI becomes more accessible to students and entrepreneurs globally, it will be interesting to see how the tech landscape shifts, especially in youth-dominated markets like India, Indonesia, and Brazil.

  • Waleed Kadous has been on the ground at ICML, sharing his highlights: video is coming. “It’s easy to think that video is only important for generating cute memes, but it’s being painted by its advocates as something more — a way for machines to understand the real world. In much the same way LLMs opened up language, cognition and thought to AI, there is a belief amongst researchers that video models will open up the “real world” to AI.”


🌏 Across the world

  • I'm all about the '70s, and thanks to Ethan Nakache, we have a fresh piece of that era. Inspired by the 1972 typeface Gesh Ortega Roman 275, Nakache has created Goodman. Reviving a typeface is challenging, with crucial decisions about which elements to retain and which to discard. Goodman has six weights with matching italics. And might I say, it looks great on a blanket.

  • How can we bring back the quirky charm of the old internet? In We Need to Rewild the Internet, Maria Farrell and Robin Berjon argue the web has become a sterile monoculture, like industrialised farming, losing its vibrant diversity. They say it’s time to “rewild”, to shake things up by breaking up tech giants and promoting diverse, open-source spaces.

  • I've been playing a lot of Starfield and, naturally, I've been diving deep into astronomy. It turns out physicists have recalculated the timeline for vacuum decay, a quantum event that could theoretically wipe out the universe. This doomsday scenario, once thought to lie in the far-off future, might happen 10,000 times sooner than previously predicted (don't worry, we'll be long gone by then). This revelation highlights the fragile balance of our cosmic environment and the delicate dance of quantum fields.

    • This sent me down a philosophical rabbit hole. As Céline Henne asks, can gaining knowledge be anything other than uncovering what was already there? Can the truth of a statement or theory be anything but its alignment with pre-existing facts?

    • Then I started reading about what we can learn from Venus, which was once potentially Earth-like before extreme greenhouse effects and volcanic activity made it inhospitable. By studying Venus's transformation, scientists hope to better understand where to look, and where not to look, for life in the cosmos.

  • Visual essay alert 👀 In Who Killed the World?, Alvin Chang analyses sci-fi narratives from the 1950s to today, revealing a shift from optimistic, problem-solving tales to dystopian warnings. Early sci-fi depicted heroes triumphing over existential threats, while modern stories often reflect societal anxieties and a bleaker outlook on the future.

  • US politics has been crazy y’all. And that’s all I’m gonna say about that.
