These Tiny Lasers Are Completely Edible

6 days 15 hours ago
"Scientists have created the first lasers made entirely from edible materials," reports Science magazine "which could someday help monitor and track the properties of foods and medications with sensors that can be harmlessly swallowed." [The researchers' report] shows that tiny droplets of everyday cooking oils can act like echo chambers of light, otherwise known as lasers. By providing the right amount of energy to an atom, the atom's electrons will excite to a higher energy level and then relax, releasing a photon of light in the process. Trap a cloud of atoms in a house of mirrors and blast them with the right amount of energy, and the light emitted by one excited atom will stimulate one of its neighbors, amplifying the atoms' collective glow... [The researchers] shot purple light at droplets of olive oil, whose surfaces can keep photons of light bouncing around, trapping them in the process. This reflected light excited the electrons in the oil's chlorophyll molecules, causing them to emit photons that triggered the glow of other chlorophyll molecules — transforming the droplet into a laser. The energy of the chlorophyll's radiation depends on the oil droplets' size, density, and other properties. The study's authors suggest this sensitivity can be exploited to track different properties of food or pharmaceutical products. When researchers added oil droplets to foods and then measured changes in the laser light the droplets emitted, they could reliably infer the foods' sugar concentration, acidity, exposure to high temperatures, and growth of microorganisms. They also used the lasers to encode information, with droplets of different diameters functioning like the lines of a barcode. By mixing in sunflower oil droplets of seven specific sizes — all less than 100 microns wide — the researchers encoded a date directly into peach compote: 26 April, 2017, the first international Stop Food Waste Day. Thanks to long-time Slashdot reader sciencehabit for sharing the news.

Read more of this story at Slashdot.

EditorDavid

Diffusion + Coding = DiffuCoder. How Apple Released a Weirdly Interesting Coding Language Model

6 days 16 hours ago
"Apple quietly dropped a new AI model on Hugging Face with an interesting twist," writes 9to5Mac. "Instead of writing code like traditional LLMs generate text (left to right, top to bottom), it can also write out of order, and improve multiple chunks at once." "The result is faster code generation, at a performance that rivals top open-source coding models." Traditionally, most LLMs have been autoregressive. This means that when you ask them something, they process your entire question, predict the first token of the answer, reprocess the entire question with the first token, predict the second token, and so on. This makes them generate text like most of us read: left to right, top to bottom... An alternative to autoregressive models is diffusion models, which have been more often used by image models like Stable Diffusion. In a nutshell, the model starts with a fuzzy, noisy image, and it iteratively removes the noise while keeping the user request in mind, steering it towards something that looks more and more like what the user requested... Lately, some large language models have looked to the diffusion architecture to generate text, and the results have been pretty promising... This behavior is especially useful for programming, where global structure matters more than linear token prediction... [Apple] released an open-source model called DiffuCode-7B-cpGRPO, that builds on top of a paper called DiffuCoder: Understanding and Improving Masked Diffusion Models for Code Generation, released just last month... [W]ith an extra training step called coupled-GRPO, it learned to generate higher-quality code with fewer passes. The result? Code that's faster to generate, globally coherent, and competitive with some of the best open-source programming models out there. Even more interestingly, Apple's model is built on top of Qwen2.5-7B, an open-source foundation model from Alibaba. Alibaba first fine-tuned that model for better code generation (as Qwen2.5-Coder-7B), then Apple took it and made its own adjustments. They turned it into a new model with a diffusion-based decoder, as described in the DiffuCoder paper, and then adjusted it again to better follow instructions. Once that was done, they trained yet another version of it using more than 20,000 carefully picked coding examples. "Although DiffuCoder did better than many diffusion-based coding models (and that was before the 4.4% bump from DiffuCoder-7B-cpGRPO), it still doesn't quite reach the level of GPT-4 or Gemini Diffusion..." the article points out. But "the bigger point is this: little by little, Apple has been laying the groundwork for its generative AI efforts with some pretty interesting and novel ideas."

Read more of this story at Slashdot.

EditorDavid