
Future Documentation

2 months 2 weeks ago

Dotan was digging through vendor-supplied documentation to understand how to use an API. To his delight, he found a specific function which solved exactly the problem he had, complete with examples of how to use it. Fantastic!

He copied one of the examples, hit compile, and reviewed the list of errors. Most of them boiled down to "the function you're calling doesn't exist". He went back to the documentation, checked it, went back to the code, found no mistakes, and scratched his head.

Now, it's worth noting the route Dotan took to find the function. He navigated there from a different documentation page, which sent him to an anchor in the middle of a larger documentation page: vendorsite.com/docs/product/specific-api#specific-function.

This meant that as the page loaded, his browser scrolled directly down to the specific-function section of the page. Thus, Dotan missed the gigantic banner at the top of the page for that API, which said this:

/!\ NOTE /!\ NOTE /!\ NOTE /!\ NOTE /!\ NOTE /!\ NOTE /!\ NOTE /!\
This doc was written to help flesh out a user API. The features described here are all hypothetical and do not actually exist yet, don't assume anything you see on this page works in any version /!\ NOTE /!\ NOTE /!\ NOTE /!\ NOTE /!\ NOTE /!\ NOTE /!\ NOTE /!\

On one hand, I think providing this kind of documentation is invaluable, both to your end users and to your own development team. It's a great roadmap, a "documentation driven development" process. And I can see that they made an attempt to be extremely clear about it being incomplete and unimplemented, but they didn't think about how people actually use their documentation site. A banner at the top of the page only works if you read the page from top to bottom, and with documentation, readers frequently jump straight to a specific section instead.
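
If a site wants a warning like this to survive anchor navigation, one fix is to detect the URL fragment on load and repeat the banner next to the section the reader actually lands on. Here's a minimal sketch in TypeScript; the ".draft-warning" selector and the banner markup are assumptions for illustration, not the vendor's actual site:

```typescript
// Minimal sketch: when a reader deep-links to an anchor like
// #specific-function, clone the page-level draft warning and place a copy
// directly above the anchored section, so it can't be scrolled past unseen.
// ".draft-warning" is a hypothetical class name, not the vendor's markup.
window.addEventListener("DOMContentLoaded", () => {
  const banner = document.querySelector<HTMLElement>(".draft-warning");
  const anchorId = window.location.hash.slice(1); // empty if no fragment
  if (!banner || !anchorId) return;

  const target = document.getElementById(anchorId);
  if (!target) return;

  // Repeat the warning right where the reader lands.
  const copy = banner.cloneNode(true) as HTMLElement;
  target.before(copy);
});
```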

But there was a deeper issue with the way this particular approach was executed: while the page announced that one shouldn't assume anything works, many of the functions on the page did work, and many did not. There was no rhyme or reason to it, no version information or other indicators to help a developer understand what was and was not actually implemented.

So while the idea of a documentation-oriented roadmap specifying features that are coming is good, the execution here verged into WTF territory. It was a roadmap, but with all the landmarks erased, so you had no idea where you actually were along the length of that road. And the one warning sign that would help you was hidden behind a bush.

Dotan asks: "WTF is that page doing on the official documentation wiki?"

And I'd say: I understand why it's there, but boy, it should have been clearer about what it actually was.

Remy Porter

AMD Is Coiled To Hockey Stick In The AI Datacenter

2 months 2 weeks ago

If all goes according to the high end of plan, then AMD should kiss $10 billion in revenues in the fourth quarter of this year, and if it was low-balling that number a little, then it should break through $10 billion and put the wrap on a $34.3 billion year that was its best year ever and its most profitable one in terms of absolute dollars and one of its better ones for net income as a share of revenue. …

AMD Is Coiled To Hockey Stick In The AI Datacenter was written by Timothy Prickett Morgan at The Next Platform.

Timothy Prickett Morgan

Google's New Hurricane Model Was Breathtakingly Good This Season

2 months 2 weeks ago
An anonymous reader quotes a report from Ars Technica: Although Google DeepMind's Weather Lab only started releasing cyclone track forecasts in June, the company's AI forecasting service performed exceptionally well. By contrast, the Global Forecast System model, which is operated by the US National Weather Service, is based on traditional physics, and runs on powerful supercomputers, performed abysmally. The official data comparing forecast model performance will not be published by the National Hurricane Center for a few months. However, Brian McNoldy, a senior researcher at the University of Miami, has already done some preliminary number crunching. The results are stunning.

A little help in reading the graphic is in order. This chart sums up the track forecast accuracy for all 13 named storms in the Atlantic Basin this season, measuring the mean position error at various hours in the forecast, from 0 to 120 hours (five days). On this chart, the lower a line is, the better a model has performed. The dotted black line shows the average forecast error for official forecasts from the 2022 to 2024 seasons. What jumps out is that the United States' premier global model, the GFS (denoted here as AVNI), is by far the worst-performing model. Meanwhile, at the bottom of the chart, in maroon, is the Google DeepMind model (GDMI), performing the best at nearly all forecast hours.

The difference in errors between the US GFS model and Google's DeepMind is remarkable. At five days, the Google forecast had an error of 165 nautical miles, compared to 360 nautical miles for the GFS model: more than twice as bad. This is the kind of error that causes forecasters to completely disregard one model in favor of another. But there's more. Google's model was so good that it regularly beat the official forecast from the National Hurricane Center (OFCL), which is produced by human experts looking at a broad array of model data. The AI-based model also beat highly regarded "consensus models," including the TVCN and HCCA products. For more information on various models and their designations, see here.
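
For a sense of what "mean position error" measures here, the sketch below computes it the way track verification is typically done: the great-circle distance between each forecast position and the observed storm position, averaged over the cases at a given lead time. The track fixes in it are invented for illustration, not real model output.

```typescript
// Mean track position error: average great-circle distance (in nautical
// miles) between forecast and observed storm positions at one lead time.
// All coordinates below are made up for illustration.

type Fix = { lat: number; lon: number }; // degrees

const EARTH_RADIUS_NM = 3440.065; // mean Earth radius in nautical miles

// Haversine great-circle distance between two points, in nautical miles.
function distanceNm(a: Fix, b: Fix): number {
  const toRad = (d: number) => (d * Math.PI) / 180;
  const dLat = toRad(b.lat - a.lat);
  const dLon = toRad(b.lon - a.lon);
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(a.lat)) * Math.cos(toRad(b.lat)) * Math.sin(dLon / 2) ** 2;
  return 2 * EARTH_RADIUS_NM * Math.asin(Math.sqrt(h));
}

// Average the error over all (forecast, observed) pairs at one lead time.
function meanPositionError(pairs: Array<[Fix, Fix]>): number {
  const total = pairs.reduce((sum, [fcst, obs]) => sum + distanceNm(fcst, obs), 0);
  return total / pairs.length;
}

// Hypothetical 120-hour verification pairs for two models.
const gfsPairs: Array<[Fix, Fix]> = [
  [{ lat: 25.0, lon: -71.0 }, { lat: 28.1, lon: -76.3 }],
  [{ lat: 30.2, lon: -60.5 }, { lat: 33.9, lon: -65.8 }],
];
const aiPairs: Array<[Fix, Fix]> = [
  [{ lat: 27.5, lon: -75.2 }, { lat: 28.1, lon: -76.3 }],
  [{ lat: 33.1, lon: -64.9 }, { lat: 33.9, lon: -65.8 }],
];

console.log(`GFS-like mean error: ${meanPositionError(gfsPairs).toFixed(0)} nm`);
console.log(`AI-like  mean error: ${meanPositionError(aiPairs).toFixed(0)} nm`);
```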

Read more of this story at Slashdot.

BeauHD