
GM Says New Battery Chemistry Will Enable 400-Mile Range EVs

General Motors is partnering with LG to develop lithium manganese-rich (LMR) batteries, which GM says are safer, denser, and cheaper than current EV battery tech. The automaker aims to begin U.S. production by 2028 and to be the first to deploy LMR cells in electric vehicles. Ford has also announced it will adopt LMR batteries for its EVs, but not until 2030.

The Verge reports: GM's current crop of electric Chevys and Cadillacs use high-nickel batteries, which supply enough energy for around 300-320 miles of range. The new LMR batteries are denser, and their prismatic shape uses space more efficiently, enabling up to 400 miles of range, GM says. Prismatic cells are packed flat in rigid cases and are generally thought to be less complex to manufacture than cylindrical cells. Less complexity and cheaper materials will, the hope goes, lead to lower-cost EVs; affordability has been a significant hurdle in the auto industry's shift to electric vehicles. "The EV growth rate is really dependent on how quickly we can bring the costs down over time," says GM's VP for batteries, Kurt Kelty. "And this is the biggest lever we have. Batteries make up roughly 30 to 40 percent of the cost of vehicles. And if you can drop that down significantly like we're doing here, then it ends up being a lower cost to the consumer."


BeauHD

CodeSOD: itouhhh…


Frequently in programming, we can make a tradeoff: use less (or more) CPU in exchange for using more (or less) memory. Lookup tables are a great example: use a big pile of memory to turn complicated calculations into O(1) operations.
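As a concrete illustration of that trade, here's a minimal sketch (my own example, not from Helen's codebase): spend 256 bytes on a table that answers "how many bits are set in this byte?" so each query is a single array index instead of a loop.

#include <stdio.h>

/* Precomputed popcount for every possible byte value: 256 bytes of
   memory buys an O(1) answer per lookup. */
static unsigned char bit_count[256];

static void init_bit_count(void)
{
    /* bit_count[i] = (low bit of i) + bits in the rest of i */
    for (int i = 1; i < 256; i++)
        bit_count[i] = (unsigned char)((i & 1) + bit_count[i / 2]);
}

int main(void)
{
    init_bit_count();
    printf("%u\n", bit_count[0xF7]); /* prints 7 */
    return 0;
}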

So, for example, when implementing itoa, the widely available (though nonstandard) C function for turning an integer into a character array (a.k.a. a string), you could maybe make it more efficient using a lookup table.

I say "maybe", because Helen inherited some C code that, well, even if it were more efficient, it doesn't help because it's wrong.

Let's start with the lookup table:

char an[1000][3] = {
    {'0','0','0'},{'0','0','1'},{'0','0','2'},{'0','0','3'},{'0','0','4'},{'0','0','5'},{'0','0','6'},{'0','0','7'},{'0','0','8'},{'0','0','9'},
    {'0','1','0'},{'0','1','1'},{'0','1','2'},{'0','1','3'},{'0','1','4'},{'0','1','5'},{'0','1','6'},{'0','1','7'},{'0','1','8'},{'0','1','9'},
    …

I'm abbreviating the lookup table for now. This lookup table is meant to be used to convert every number from 0…999 into a string representation.
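Hand-typing a thousand rows is precisely the kind of job a loop should be doing. A sketch of how the same table could have been generated at startup (reusing the name an purely for illustration):

/* Build the 000..999 digit table at startup instead of typing it by hand,
   so no rows can be accidentally dropped. */
static char an[1000][3];

static void init_an(void)
{
    for (int i = 0; i < 1000; i++) {
        an[i][0] = (char)('0' + i / 100);        /* hundreds digit */
        an[i][1] = (char)('0' + (i / 10) % 10);  /* tens digit */
        an[i][2] = (char)('0' + i % 10);         /* ones digit */
    }
}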

Let's take a look at how it's used.

int ll = f->cfg.len_len;
long dl = f->data_len;
// Prepare length
if ( NULL == dst )
{
    dst_len = f->data_len + ll + 1 ;
    dst = (char*) malloc ( dst_len );
}
else //if( dst_len < ll + dl )
if( dst_len < (unsigned) (ll + dl) )
{
    // TO DOO - error should be processed
    break;
}
long i2;
switch ( f->cfg.len_fmt )
{
case ASCII_FORM:
{
    if ( ll < 2 )
    {
        dst[0]=an[dl][2];
    }
    else if ( ll < 3 )
    {
        dst[0]=an[dl][1];
        dst[1]=an[dl][2];
    }
    else if ( ll < 4 )
    {
        dst[0]=an[dl][0];
        dst[1]=an[dl][1];
        dst[2]=an[dl][2];
    }
    else if ( ll < 5 )
    {
        i2 = dl / 1000;
        dst[0]=an[i2][2];
        i2 = dl % 1000;
        dst[3]=an[i2][2];
        dst[2]=an[i2][1];
        dst[1]=an[i2][0];
    }
    else if ( ll < 6 )
    {
        i2 = dl / 1000;
        dst[0]=an[i2][1];
        dst[1]=an[i2][2];
        i2 = dl % 1000;
        dst[4]=an[i2][2];
        dst[3]=an[i2][1];
        dst[2]=an[i2][0];
    }
    else
    {
        // General case
        for ( int k = ll ; k > 0 ; k-- )
        {
            dst[k-1] = '0' + dl % 10;
            dl /= 10;
        }
    }
    dst[dl]=0;
    break;
}
}

Okay, we start with some reasonable bounds checking. I have no idea what to make of a struct member called len_len: the length of the length? I'm lacking some context here.

Then we get into the switch statement. For all lengths under four digits, everything makes sense, more or less. I'm not sure what the point of using a 2D array for your lookup table is if you're also copying one character at a time (a single memcpy(dst, an[dl], 3) would grab a whole row), but for such a small number of copies I'm sure it's fine.

But then we get into the len_lens longer than 3, and we start dividing by 1000 so that our lookup table continues to work. Which, again, I guess is fine, but I'm still left wondering why we're doing this, why this specific chain of optimizations is what we need. And frankly, why we couldn't just use itoa or a similar library function, which already does this and is probably more optimized than anything I'm going to write.
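For what it's worth, everything this switch arm is doing (fixed-width, zero-padded decimal formatting) is one standard library call away. A minimal sketch, assuming dl is nonnegative and the buffer is big enough:

#include <stdio.h>

int main(void)
{
    long dl = 1234;  /* value to format (assumed nonnegative) */
    int  ll = 6;     /* desired field width */
    char dst[16];

    /* %0*ld: zero-padded, fixed-width decimal conversion of a long. */
    snprintf(dst, sizeof dst, "%0*ld", ll, dl);
    printf("%s\n", dst); /* prints 001234 */
    return 0;
}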

When we have an output longer than 5 characters, we just use a naive for-loop and some modulus as our "general" case.
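And notably, that fallback already handles every width, including the small ones, which makes the lookup table and the whole if-ladder dead weight. A sketch of the arm reduced to just the loop (with a scratch variable so the value survives, and the terminator written at the right index):

/* Write dl as exactly ll zero-padded decimal digits, NUL-terminated.
   Assumes dl >= 0 and dst has room for ll + 1 bytes. */
static void write_fixed_width(char *dst, int ll, long dl)
{
    long v = dl;
    for (int k = ll; k > 0; k--) {
        dst[k - 1] = (char)('0' + v % 10);
        v /= 10;
    }
    dst[ll] = '\0';
}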

So no, I don't like this code. It reeks of premature optimization, and it has the vibe of someone who started optimizing without fully understanding the problem, then tried to change course midstream without rethinking the solution.

But there's a punchline to all of this. Because, you see, I skipped most of the lookup table. Would you like to see how it ends? Of course you do:

{'9','8','0'},{'9','8','1'},{'9','8','2'},{'9','8','3'},{'9','8','4'},{'9','8','5'},{'9','8','6'},{'9','8','7'},{'9','8','8'},{'9','8','9'} };

The lookup table has no entries for values from 990 to 999. All this effort to optimize converting integers to text, and we end up with a function that's wrong for 1% of the possible values it could receive. And the failure is quiet: because the array is declared with an explicit size of 1000, the ten missing rows are zero-initialized, so those values produce NUL bytes instead of digits. Had the author let the compiler infer the size (char an[][3]), the same lookups would be out-of-bounds reads, and thus everyone's favorite problem: undefined behavior. Maybe it segfaults, maybe it returns whatever bytes it finds, maybe it sends the nasal demons after you. The compiler is allowed to do anything.
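And if a hand-written table is truly non-negotiable, you can at least make the compiler count the rows for you: leave the array size to be inferred from the initializer, then statically assert the count. A scaled-down sketch, assuming a C11 compiler for _Static_assert:

/* Size is inferred from the initializer; the assert refuses to compile
   if any row goes missing. (Three rows here just for demonstration.) */
static const char demo[][3] = {
    {'0','0','0'}, {'0','0','1'}, {'0','0','2'}
};
_Static_assert(sizeof demo / sizeof demo[0] == 3,
               "digit table is missing rows");

With an[][3] and a check against 1000, the missing ten rows would have been a compile error instead of a production bug.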

Remy Porter