For a few years now, NVIDIA has been offering their line of Jetson embedded system kits. Originally launched using Tegra K1 in 2014, the first Jetson was designed to be a dev kit for groups looking to build their own Tegra-based devices from scratch. Instead, to NVIDIA's surprise, groups used the Jetson board as-is and built their devices around it. This unexpected market led NVIDIA to pivot a bit on what Jetson would be, resulting in the second-generation Jetson TX1, a proper embedded system board that can be used for both development purposes and production devices.

This relaunched Jetson came at an interesting time for NVIDIA, which was right when their fortunes in neural networking/deep learning took off in earnest. Though the Jetson TX1 and underlying Tegra X1 SoC lack the power needed for high-performance use cases – these are after all based on an SoC designed for mobile applications – they have enough power for lower-performance inferencing. As a result, the Jetson TX1 has become an important part of NVIDIA’s neural networking triad, offering their GPU architecture and its various benefits for devices doing inferencing at the “edge” of a system.

Now about a year and a half after the launch of the Jetson TX1, NVIDIA is giving the Jetson platform a significant update in the form of the Jetson TX2. This updated Jetson is not as radical a change as the TX1 was before it – NVIDIA seems to have found a good place in terms of form factor and the platform’s core feature set – but NVIDIA is looking to take what worked with TX1 and further ramp up the performance of the platform.

The big change here is the upgrade to NVIDIA’s newest-generation Parker SoC. While Parker never made it into third-party mobile designs, NVIDIA has been leveraging it internally for the Drive system and other projects, and now it will finally become the heart of the Jetson platform as well. Relative to the Tegra X1 in the previous Jetson, Parker is a bigger and better version of the SoC. The GPU architecture is upgraded to NVIDIA’s latest-generation Pascal architecture, and on the CPU side NVIDIA adds a pair of Denver 2 CPU cores to the existing quad-core Cortex-A57 cluster. Equally important, Parker finally goes back to a 128-bit memory bus, greatly boosting the memory bandwidth available to the SoC. The resulting SoC is fabbed on TSMC’s 16nm FinFET process, giving NVIDIA a much-welcomed improvement in power efficiency.

Paired with Parker on the Jetson TX2 as supporting hardware is 8GB of LPDDR4-3733 DRAM, a 32GB eMMC flash module, a 2x2 802.11ac + Bluetooth wireless radio, and a Gigabit Ethernet controller. The resulting board is still 50mm x 87mm in size, with NVIDIA intending it to be drop-in compatible with Jetson TX1.
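The return to a 128-bit bus is easy to quantify: peak theoretical bandwidth is just the transfer rate times the bus width in bytes. A back-of-the-envelope sketch (derived from the specs above, not NVIDIA-quoted figures):

```python
# Peak theoretical DRAM bandwidth = transfer rate x bus width in bytes.
# LPDDR4-3733 performs 3733 million transfers per second; a 128-bit bus
# carries 16 bytes per transfer.
transfers_per_sec = 3733e6
bus_width_bytes = 128 // 8  # 16 bytes

bandwidth_gbps = transfers_per_sec * bus_width_bytes / 1e9
print(f"TX2: {bandwidth_gbps:.1f} GB/s")  # ~59.7 GB/s

# For comparison, the Jetson TX1's 64-bit LPDDR4-3200 interface:
tx1_gbps = 3200e6 * (64 // 8) / 1e9
print(f"TX1: {tx1_gbps:.1f} GB/s")  # ~25.6 GB/s
```

So on paper, the wider bus alone more than doubles the bandwidth available to the SoC.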

Given these upgrades to the core hardware, unsurprisingly NVIDIA’s primary marketing angle with the Jetson TX2 is on its performance relative to the TX1. In a bit of a departure from the TX1, NVIDIA is canonizing two performance modes on the TX2: Max-Q and Max-P. Max-Q is the company’s name for TX2’s energy efficiency mode; at 7.5W, this mode clocks the Parker SoC for efficiency over performance – essentially placing it right before the bend in the power/performance curve – with NVIDIA claiming that this mode offers 2x the energy efficiency of the Jetson TX1. In this mode, TX2 should have similar performance to TX1 in the latter's max performance mode.

Meanwhile the board’s Max-P mode is its maximum performance mode. In this mode NVIDIA sets the board TDP to 15W, allowing the TX2 to hit higher performance at the cost of some energy efficiency. NVIDIA claims that Max-P offers up to 2x the performance of the Jetson TX1, though as GPU clockspeeds aren't double TX1's, it's going to be a bit more sensitive on an application-by-application basis.

NVIDIA Jetson TX2 Performance Modes
                       Max-Q    Max-P                 Max Clocks
GPU Frequency          854MHz   1122MHz               1302MHz
Cortex-A57 Frequency   1.2GHz   Stand-Alone: 2GHz     2GHz+
                                w/Denver: 1.4GHz
Denver 2 Frequency     N/A      Stand-Alone: 2GHz     2GHz
                                w/A57: 1.4GHz
TDP                    7.5W     15W                   N/A

In terms of clockspeeds, NVIDIA has disclosed that in Max-Q mode, the GPU is clocked at 854MHz while the Cortex-A57 cluster is at 1.2GHz. Going to Max-P increases the GPU clockspeed further to 1122MHz, and allows for multiple CPU options: either the Cortex-A57 cluster or Denver 2 cluster can be run at 2GHz, or both can be run at 1.4GHz. Though when it comes to all-out performance, even Max-P mode is below the TX2's limits; the GPU clock can top out at just over 1300MHz and CPU clocks can reach 2GHz or better. Power states are configurable, so customers can dial in the TDPs and desired clockspeeds they want; however, NVIDIA notes that using the maximum clocks goes further outside of the Parker SoC’s efficiency range.
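The disclosed figures can be collected into a small lookup, e.g. for a script that picks a profile by power budget. This is a hypothetical helper for illustration only, not part of NVIDIA's tooling (on the board itself, profiles are switched with NVIDIA's nvpmodel utility):

```python
# Disclosed Jetson TX2 performance modes, keyed by NVIDIA's names.
# Note: in Max-P, 2.0GHz CPU clocks apply when only one cluster is
# active; running both clusters together drops them to 1.4GHz.
MODES = {
    "Max-Q": {"tdp_w": 7.5,  "gpu_mhz": 854,  "a57_ghz": 1.2, "denver_ghz": None},
    "Max-P": {"tdp_w": 15.0, "gpu_mhz": 1122, "a57_ghz": 2.0, "denver_ghz": 2.0},
}

def pick_mode(power_budget_w):
    """Return the highest-performance mode that fits a power budget (watts)."""
    fitting = [(name, cfg) for name, cfg in MODES.items()
               if cfg["tdp_w"] <= power_budget_w]
    if not fitting:
        raise ValueError("no mode fits the budget")
    # Use GPU clock as the performance proxy.
    return max(fitting, key=lambda mc: mc[1]["gpu_mhz"])[0]

print(pick_mode(10))  # Max-Q
print(pick_mode(20))  # Max-P
```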

Finally, along with announcing the Jetson TX2 module itself, NVIDIA is also announcing a Jetson TX2 development kit. The dev kit will actually ship first – it ships next week in the US and Europe, with other regions in April – and contains a TX2 module along with a carrier board to provide I/O breakout and interfaces to various features such as USB, HDMI, and Ethernet. Judging from the pictures NVIDIA has sent over, the TX2 carrier board is very similar (if not identical) to the TX1 carrier board, so like the TX2 itself it should be familiar to existing Jetson developers.

With the dev kit leading the charge for Jetson TX2, NVIDIA will be selling it for $599 retail/$299 education, the same price the Jetson TX1 dev kit launched at back in 2015. Meanwhile the stand-alone Jetson TX2 module will be arriving in Q2’17, priced at $399 in 1K unit quantities. In the case of the module, this means prices have gone up a bit since the last generation; the TX2 is hitting the market at $100 higher than where the TX1 launched.

Source: NVIDIA

59 Comments


  • willis936 - Wednesday, March 8, 2017 - link

    Or, you know, they'd like to see many companies succeed so technologies are more competitive. If no one cared then no one would be reading a pointless article on a pointless site.
  • ddriver - Wednesday, March 8, 2017 - link

    This is not a consumer product. It is not the kind of thing you "want one of". I could put this to good use, and buy tens of thousands, had it been worth the money. At this price it is not.

    Once again, what's more puzzling here is why people like you defend the ridiculous pricing with such dedication. And sure, some big OEMs with massively overpriced products could afford it. A billionaire could afford to blow his nose on 100 dollar bills, but does he do that?
  • Yojimbo - Wednesday, March 8, 2017 - link

    ""Multi-billion" R&D costs are covered by selling a lot of chips at reasonable margins, not by selling a few but ridiculously overpriced. And if it really did cost "multi-billion" to R&D such products, then nvidia would be bankrupting itself, as it hasn't got anywhere near that amount of revenue from automotive since they got into it, and it would eat most of their revenues from other market niches."

    The multi-billion dollars in R&D spans all their products. There are some additional hardware and firmware costs associated specifically with Tegra and specifically with Jetson, and there is similarly software development that covers Jetson but not, for instance, gaming.

    "And that's the beauty of IP - once you do the design, you can churn millions upon millions of products based on it, which reduces the R&D cost per product to almost negligible."

    It's not even close to almost negligible! You can't just conveniently divide their costs the way you see fit for your rant of the day.

    "$15 for the chip, $30 for the module, $60 for the dev board. Those would be realistic production costs, I mean how much it costs them to make it. What justifies asking 10 times more than it's worth?"

    The price of the board isn't based on cost. The cost limits how low of a price they can charge but it doesn't set the price. The price is based on value. NVIDIA is able to charge the money it can charge because of the value their product provides. That's how money is made, by providing a good or service with a value that is greater than the cost to provide the good or service. The fact that you think that the cost to produce the board (whether you figure in the r&d and sg&a costs or not) is what the board is "worth" shows that you don't understand the very basics of business or economics.

    "Quite frankly, while it is understandable why I would criticize the pricing, not only from my perspective, but also from the perspective that they wouldn't be able to win many designs at that cost, it is quite curious why individuals like you rush so desperately to defend it. Why are people like you acting as if I have said something bad about your mommas?"

    This is a public message board and when we see you say something stupid we point it out. And the things you have said are quite stupid in a basic economic sense. Don't get mad about it. Educate yourself. As far as getting more design wins, it's interesting (i.e. laughable) that you think you can judge the supply/demand curve of the developer board market better than the company themselves can. But from the things you've said you most likely don't have any idea of what a supply/demand curve is or what it means. You're just basing your judgments on thin air. If you think a product is worth its cost to produce then a supply/demand curve is useless to you. Of course if you were making the board I hope eating would be useless to you as well, but maybe you'd add eating cost into the worth of the product. If you wanted a shiny new Ferrari I guess you'd have to add that into the developer board's worth, too.
  • Itselectric - Wednesday, March 8, 2017 - link

    You are wrong; they're not saying that they should sell at cost, but that with R&D included their cost is maybe $120 all-in per dev kit. When they ask $600 for it, that's grossly overpriced and their marginal cost will not be equal to their marginal revenue.
  • Meteor2 - Wednesday, March 8, 2017 - link

    I still don't get why anyone cares. Ferraris are expensive. Rolexes are expensive. Boeings are expensive. It happens.
  • I'msureyouareatroll - Wednesday, March 8, 2017 - link

    Wrong. That's branding, and it's specifically designed to target different users/audiences. I know nothing about you but, assuming you are a mortal as I am, these things weren't designed to target you or me. The massive dev board of the TX1/2 is completely useless: if you have to install any peripheral via USB, like a camera, you end up with just one spare micro-USB port, which means you need to buy an extra USB hub in order to connect a mouse and keyboard as a minimum. You then have to invest another 300-500 € to buy a dedicated carrier board if you want to take full advantage of the Jetson TX1's size. If you don't believe me, go and check the prices of carrier boards sold by Auvidea or Connect Tech.
    On top of this, support is virtually nonexistent, and the community is virtually no one. You are left hanging with all sorts of questions for days/months; the so-called advantage of, for example, connecting several cameras to the Jetson is virtually a wet dream: there are no drivers available, and if you want the only ones in existence you've got to pay another €2500.
    If you compare this thing with a widely used Raspberry Pi in terms of community/support and even hardware usability, I can say this thing will never take off.
  • jospoortvliet - Thursday, March 9, 2017 - link

    Dude, what are you even talking about? What is 2.5k if you are designing a car and need a handful of these dev boards? These prices are perfectly fine for the niche this board is aiming at. It isn't aiming at home users playing with Raspberry Pi boards...
  • Yojimbo - Wednesday, March 8, 2017 - link

    No, he specifically said $15 to produce. And then he said it was worth what it costs, so obviously he wants it to be sold at cost.
  • ddriver - Wednesday, March 8, 2017 - link

    Oh, so you're a pointer-out of stupidity, are you? That's quite amusing, considering your post is riddled with stupidity and general lack of understanding of the subject or even common sense. Unlike you I have many years of experience in that field, I know how much stuff costs, and how much stuff is worth.

    nvidia has been pretending to be intel lately; they act all high and mighty and expect ridiculous margins. Which is why their mobile platforms are nowhere to be seen. They are aiming to milk one particular cow they know is easiest to milk, but they are not in a position to do so. This product is for the time being unique, but that won't last long. We are a few months away from Zen APUs, which would make this product irrelevant, at least at that price. With the added benefit that you don't get locked into CUDA, but get to use modern OpenCL, which runs on a variety of other platforms and can even be compiled to Verilog and put directly into FPGAs or even silicon.

    Sure, they are making some money on automotive, but I highly doubt the actual reason is that their products are worth the money; corporate interests and politics are at play there. Not logic, not reason, not the consumer's best interest.
  • Yojimbo - Wednesday, March 8, 2017 - link

    "That's quite amusing, considering your post is riddled with stupidity and general lack of understanding of the subject or even common sense. "

    OK, show me what I said that shows a lack of understanding or common sense.

    "Unlike you I have many years of experience in that field, I know how much stuff costs, and how much stuff is worth."

    Saying it's so doesn't make it true.

    "We are a few months away from Zen APUs, which would make this product irrelevant, at least at that price. With the added benefit that you don't get locked into CUDA, but get to use modern OpenCL, which runs on a variety of other platforms and can even be compiled to Verilog and put directly into FPGAs or even silicon."

    Yeah sure. Let's wait a couple years and see how that works out. Using OpenCL might be great if there were both strong hardware and strong libraries for it. Even if Zen APUs are competitive, AMD is going to leave it up to everyone else to create the development tools. It's something that takes years to do.

    "Sure, they are making some money on automotive, but I highly doubt the actual reason is that their products are worth the money; corporate interests and politics are at play there. Not logic, not reason, not the consumer's best interest."

    Yup, everyone is stupid except you. NVIDIA, their customers... You just aren't able to say why. Must be some big conspiracy. If only the world wasn't corrupt you'd be a multi-trillionaire by now.
