chuckula
Gold subscriber
Gerbil Jedi
Topic Author
Posts: 1890
Joined: Wed Jan 23, 2008 9:18 pm
Location: Probably where I don't belong.

PhysX Just Went Open Source

Mon Dec 03, 2018 9:39 am

Not April Fool's: https://www.phoronix.com/scan.php?page= ... urce-PhysX

As usual: OMG THANKS AMD!
4770K @ 4.7 GHz; 32GB DDR3-2133; GTX-1080 sold and back to hipster IGP!; 512GB 840 Pro (2x); Fractal Define XL-R2; NZXT Kraken-X60
--Many thanks to the TR Forum for advice in getting it built.
 
NovusBogus
Silver subscriber
Graphmaster Gerbil
Posts: 1314
Joined: Sun Jan 06, 2013 12:37 am

Re: PhysX Just Went Open Source

Mon Dec 03, 2018 10:56 am

BSD license? Holy smoke.

Guess they've finally acknowledged that PhysX didn't catch on as a gaming technology and are hoping that someone else can make better use of it.
 
LostCat
Minister of Gerbil Affairs
Posts: 2005
Joined: Thu Aug 26, 2004 6:18 am
Location: Alphanumeric symbols.

Re: PhysX Just Went Open Source

Mon Dec 03, 2018 11:42 am

chuckula wrote:
Not April Fool's: https://www.phoronix.com/scan.php?page= ... urce-PhysX

As usual: OMG THANKS AMD!

I mean you say that, but https://gpuopen.com/

...as far as physics engines go, MS owns Havok now, and I haven't seen many new PhysX games released lately, so I'm not sure it really makes a difference.
Meow.
 
DragonDaddyBear
Gerbil Elite
Posts: 788
Joined: Fri Jan 30, 2009 8:01 am

Re: PhysX Just Went Open Source

Mon Dec 03, 2018 12:12 pm

I hope that changes now that it's open.

Is this something where developers can just take the existing tool sets out there and use them, or is there more work to be done?
 
Heiwashin
Maximum Gerbil
Posts: 4579
Joined: Wed Dec 13, 2006 1:21 pm
Location: Denham Springs, LA

Re: PhysX Just Went Open Source

Mon Dec 03, 2018 12:21 pm

Maybe Unity can use it to fix the weird physics they never seem to get right.
Looking for Knowledge wrote:
When drunk.....
I want to have sex, but find I am more likely to be shot down than when I am sober.
 
Concupiscence
Gerbil Elite
Posts: 587
Joined: Tue Sep 25, 2012 7:58 am
Location: Dallas area, Texas, USA

Re: PhysX Just Went Open Source

Mon Dec 03, 2018 1:05 pm

I mean, at the very least it'd be nice to see the CPU-driven portion of PhysX rewritten to use SIMD instead of grody old x87. Whether that's possible without fundamentally changing behavior for applications relying on it remains to be seen.
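To make that concrete, here's a minimal sketch (my own illustration, not PhysX code) of the kind of rewrite I mean: the same array sum as a plain scalar loop, which an x87-era build crunches one float at a time, versus SSE intrinsics doing four floats per instruction.

Code:
#include <xmmintrin.h>  // SSE intrinsics

// Scalar version: one float per operation, roughly what an x87 build does.
void add_scalar(const float* a, const float* b, float* out, int n)
{
    for (int i = 0; i < n; ++i)
        out[i] = a[i] + b[i];
}

// SSE version: four floats per instruction. Assumes n is a multiple of 4
// and 16-byte-aligned pointers, just to keep the sketch short.
void add_sse(const float* a, const float* b, float* out, int n)
{
    for (int i = 0; i < n; i += 4) {
        __m128 va = _mm_load_ps(a + i);
        __m128 vb = _mm_load_ps(b + i);
        _mm_store_ps(out + i, _mm_add_ps(va, vb));
    }
}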
Workstation: Core i9 7940x, 32 gigs RAM, Geforce GTX 1070 Ti, Windows 10 Pro
Play: Ryzen 7 1700, 16 gigs RAM, Geforce GTX Titan X (Maxwell), Xubuntu 18.04
 
Concupiscence
Gerbil Elite
Posts: 587
Joined: Tue Sep 25, 2012 7:58 am
Location: Dallas area, Texas, USA

Re: PhysX Just Went Open Source

Mon Dec 03, 2018 4:41 pm

nuiiii wrote:
I don't care much about PhysX; I mostly play old-school games, but sometimes it's nice to play something heavy, cool, and detailed. What PhysX-supporting game would you advise playing?
Some interesting projects may appear now that PhysX is open.


I was pretty impressed by the PhysX effects in Cryostasis. Unreal Tournament 3 and Batman: Arkham Asylum both benefited from it as well. It was never a vital feature, but the right game could take good advantage of it.
Workstation: Core i9 7940x, 32 gigs RAM, Geforce GTX 1070 Ti, Windows 10 Pro
Play: Ryzen 7 1700, 16 gigs RAM, Geforce GTX Titan X (Maxwell), Xubuntu 18.04
 
Ryu Connor
Gold subscriber
Global Moderator
Posts: 4324
Joined: Thu Dec 27, 2001 7:00 pm
Location: Marietta, GA
Contact:

Re: PhysX Just Went Open Source

Thu Dec 06, 2018 6:33 pm

Concupiscence wrote:
I mean, at the very least it'd be nice to see the CPU-driven portion of PhysX rewritten to use SIMD instead of grody old x87. Whether that's possible without fundamentally changing behavior for applications relying on it remains to be seen.


It already is in modern versions.

One of the developers who worked on PhysX during the era it used x87 also has a bone to pick with his code being called unoptimized. His point of view is worth a read.

http://www.codercorner.com/blog/?p=1129

Also worth noting that PhysX became part of the Unreal Engine. It's still used today with UE4. So it's not as dead as people assume.

https://docs.unrealengine.com/en-us/Engine/Physics
https://wiki.unrealengine.com/PhysX,_In ... ur_Project
All of my written content here on TR does not represent or reflect the views of my employer or any reasonable human being. All content and actions are my own.
 
DoomGuy64
Gerbil
Posts: 29
Joined: Mon Jun 08, 2015 4:09 pm

Re: PhysX Just Went Open Source

Sun Dec 09, 2018 3:07 pm

I think the exposé article is more worth a read, especially after that developer's POV blog.
https://www.realworldtech.com/physx87/
Several points I think are pertinent:
*PhysX was designed to run on a 32-bit 500 MHz MIPS CPU, and was single precision by nature.
*IMO, the PPU was like having a Raspberry Pi in your PC as a compute off-loader.
*All CPUs in use during the original release period supported SSE2, let alone SSE, while PhysX was single-precision code that ran extended-precision x87 on the CPU.
*They could have used packed, single-precision SSE for PhysX. Each instruction would execute up to 4 SIMD operations per cycle, rather than just one scalar operation.
*Nvidia had PhysX running on consoles using the AltiVec extensions for PPC, which are very similar to SSE.
*Nvidia has clarified that CPU PhysX is single-threaded by default and multi-threading is left to the developer (see the sketch after this list).
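To illustrate what "left to the developer" means in practice, here's a minimal sketch with hypothetical types (not the real PhysX API): the engine gives you independent simulation islands, and spreading them across threads is your job.

Code:
#include <cstddef>
#include <thread>
#include <vector>

// Hypothetical stand-in for a physics SDK's types; the real PhysX API
// differs. An island is a group of bodies that only interact with
// each other, so islands can be simulated independently.
struct Island {
    std::vector<float> positions, velocities;
};

// Trivial integration step so the sketch is self-contained.
void simulate(Island& island, float dt)
{
    for (std::size_t i = 0; i < island.positions.size(); ++i)
        island.positions[i] += island.velocities[i] * dt;
}

// The developer's side of the bargain: one worker per island, joined
// once per frame. The engine won't do this for you by default.
void step_all(std::vector<Island>& islands, float dt)
{
    std::vector<std::thread> workers;
    workers.reserve(islands.size());
    for (Island& island : islands)
        workers.emplace_back([&island, dt] { simulate(island, dt); });
    for (std::thread& t : workers)
        t.join();
}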

Points from the blog article:
*The developer admits that merely enabling SSE in the compiler, without code optimization, would have instantly given up to 20% performance, then claims there was no point in using it and that supporting it would cause compatibility issues (the flags in question are shown after this list).
*Just imagine the performance here with multi-threading enabled, even without code optimization.
*The article claims the original version didn't support multi-threading, but I swear multi-threading was working before Nvidia bought out Ageia. I've read claims that Ageia CPU PhysX was faster in some games simply because those games supported multi-threading. So even if multi-threading wasn't supported early on, it eventually was, but developers were never encouraged to use it.
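For reference, "merely enabling SSE in the compiler" really is a build-flag change rather than a rewrite. These flags are real; the file names are just placeholders:

Code:
:: MSVC (32-bit compiler): generate SSE2 code instead of x87 for floats
cl /arch:SSE2 physics.cpp

# GCC: enable SSE2 and use it (rather than x87) for scalar float math
gcc -msse2 -mfpmath=sse -O2 physics.cpp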

Other points:
*The only CPUs I can think of not supporting SSE2 were the Athlon XP and Pentium III, which nobody was using for Vista and DX10 gaming. I think both still supported SSE.
*Nvidia quickly ported it to CUDA, where it not only saw a performance increase but barely reduced graphics performance, proving the overhead was minimal and it was easy to optimize.
**The CUDA version did not support ALL of the PhysX effects the PPU did, yet still ran fine.
*The console version was optimized and used multi-threading way before the PC.
*The newest version of PhysX supposedly runs fine on CPUs.
*Games built on older versions of PhysX are not compatible with the new PhysX and require the legacy installer, virtually locking older games out of any performance increases.
*The official excuses for PhysX performance have all been extremely shady, with points easily made against them. The best one was probably multi-threading being left to developers.
*Since the latest version officially supports SSE2 and multi-threading, it no longer has any value as a vendor lock-in feature, and has been open sourced to keep developers.

IMO, for open source PhysX to be useful, someone has to backport the optimizations for PhysX 2 compatibility. AFAIK, it's not currently possible to install PhysX 3 and run those older games with the speed improvements.
 
Chrispy_
Maximum Gerbil
Posts: 4481
Joined: Fri Apr 09, 2004 3:49 pm
Location: Europe, most frequently London.

Re: PhysX Just Went Open Source

Sun Dec 09, 2018 5:16 pm

Meh, by vendor-locking it all these years it has been abandoned by everyone relevant.

Making it open source is just a way for Nvidia to claw back some goodwill from something that is otherwise obsolete and useless to them.
Congratulations, you've noticed that this year's signature is based on outdated internet memes; CLICK HERE NOW to experience this unforgettable phenomenon. This sentence is just filler and as irrelevant as my signature.
 
synthtel2
Gold subscriber
Gerbil Elite
Posts: 858
Joined: Mon Nov 16, 2015 10:30 am

Re: PhysX Just Went Open Source

Sun Dec 09, 2018 6:02 pm

Unity and Unreal both use PhysX. It's not dead, irrelevant, or anything like that.

Kanter and Terdiman can both be mostly right. Terdiman's explanation scans perfectly pre-Nvidia, not so much by the time Kanter wrote that article.

DoomGuy64 wrote:
*The developer admits that merely enabling SSE in the compiler, without code optimization, would have instantly given up to 20% performance, then claims there was no point in using it and that supporting it would cause compatibility issues.

I think that's the most solid point. I don't doubt that there were some issues with that, but Nvidia has and had the resources to sort that kind of problem out in short order if they cared to, and it seems very unlikely that they couldn't get any serious gains out of SSE without a major rewrite. That said, you're misquoting him - the relevant text is "At the time, with the available compiler, we never saw more than 20% in the very best of case. And most of the time, for actual scenes running in actual games, we saw virtually no gains at all."
 
DoomGuy64
Gerbil
Posts: 29
Joined: Mon Jun 08, 2015 4:09 pm

Re: PhysX Just Went Open Source

Sun Dec 09, 2018 8:20 pm

At the time, with the available compiler, we never saw more than 20% in the very best of case. And most of the time, for actual scenes running in actual games, we saw virtually no gains at all.

See, here's the thing. I don't believe him. In fact, I don't believe anything he's said in general, because most of it has been questionable.

The claim about SSE2 not being mainstream was a lie. Everyone was using at minimum a P4 by the time PhysX came out, while Athlon 64s and even Intel Core were out, AFAIK.

The difficulty of coding for SSE seems suspect too. They were smart enough to code for MIPS, which isn't a widely used architecture, while SSE was pretty mainstream at the time; programmers who claim difficulty or lack of knowledge are just shady. Hell, why not just set a compiler flag for SSE, or even MMX, since that did improve performance?

The performance of that outdated 500 MHz MIPS chip seems suspect as well, especially after CUDA gave PhysX a massive performance boost without making games unplayable. The PPU radically slowed games down when used, and while the PCIe x1 bus was likely a bottleneck, I think the MIPS chip was also slow.

He mentioned that modern compilers have improved, giving a potential out for the old "unoptimized" code performing better when recompiled on modern compilers.

There has never been any public proof that this mythical old code, compiled for SSE, showed no real-world performance gains. It's all conjecture. We only know there was a potential 20% gain for unoptimized single-threaded PhysX, and the CPU tested is never mentioned (probably a Pentium 4 of unknown generation). Simply put, if an older P4 saw a potential 20% gain, how much would an Athlon 64 have seen? An Athlon 64 X2 with multi-threading?

I can't trust his excuses, since most of them are void of evidence and laced with contradictory statements, while the PPU was arguably a scam from day one. If it was, there's no way he would admit it, and these excuses seem to be just that: something to justify their business model, which was vendor lock-in from day one. PhysX was double-dipping profit from both hardware and software, and that's why Nvidia bought them out. If not for Nvidia, and the later console and CPU updates, it would never have gone anywhere. Once they saw developers ignoring their middleware, being bought out was probably the only plan to stay in business, and it worked.
 
synthtel2
Gold subscriber
Gerbil Elite
Posts: 858
Joined: Mon Nov 16, 2015 10:30 am

Re: PhysX Just Went Open Source

Sun Dec 09, 2018 9:45 pm

DoomGuy64 wrote:
The thing about SSE2 not being mainstream was a lie. Everyone was using at minimum a p4 by the time PhysX came out, while Athlon 64's and even the intel core was out afaik.
The history I can find is surprisingly sparse, but Ageia bought NovodeX in 2004 and it sounds like the development of this tech goes significantly further back than that. Let's say it was mid-2003. AMD hadn't released a CPU supporting SSE2 yet, and a whole lot of Pentium IIIs that didn't support SSE2 were still in circulation. SSE was a lot less obnoxious to use than x87 when it would work, but wasn't really feature-complete. Now say you're a programmer who is already familiar with x87's jankiness, and you don't want the overhead of having multiple ways of doing things just yet. What's the obvious choice here?

DoomGuy64 wrote:
Difficulty of coding for SSE seems suspect. I mean, they were smart enough to code for MIPS, which isn't a widely used architecture. SSE was pretty mainstream at the time, and programmers who make claims of difficulty or lack of knowledge are just shady. Hell, why not just compile flag for SSE, or even MMX, since it did improve performance?
It isn't "can we do this?" so much as "do we have an appropriate business reason to put in the time to do this?" The business reason for MIPS would have been obvious the moment it came up. SSE, less so.

DoomGuy64 wrote:
The performance of that outdated 500Mhz MIPS chip seems suspect, especially after CUDA gave it a massive performance boost without making games unplayable. The PPU radically slowed games down when used, and while the pcie 1x bus likely was a bottleneck, I think the MIPS chip was also slow.
The MIPS chip was definitely slow. It was a lot more parallel than a CPU, of course, but it was on 130nm when 65nm was the usual. There's only so much you can do with that. It did still have a fairly good reason to exist in that it could free up the CPU to do other work. Having more threads than we know what to do with is a recent development.

DoomGuy64 wrote:
He mentioned that modern compilers have improved, giving a potential out for performance of the old "unoptimized" code being recompiled on modern compilers.
There never has been any public proof of this mythical old code being compiled for SSE not having real world performance gains. It's all conjecture. We only know that there was a 20% potential gain for unoptimized single threaded physx, and the CPU tested for this is not mentioned. (probably a pentium4 of unknown generation.) Simply put, if an older p4 saw a potential 20% gain, how much would an Athlon 64 have seen? An Athlon 64x2 with multi-threading?
Back then, auto-vectorizers really were bad. It isn't tough to believe. +20% in some particular test case doesn't necessarily mean you'll see any gain at all in the real world (or even avoid a performance regression).

This is necessarily all conjecture. What kind of proof would you accept that could be reasonably expected to exist? What numbers would you expect to see that he would have no way of fudging if he were so inclined? If you start by assuming he's lying, then of course nothing else he says will be believable.

DoomGuy64 wrote:
I can't trust his excuses, since most of them are void of evidence and laced with contradictory statements, while the PPU was questionably a scam from day one. If it was, there's no way he would fully admit it, and these excuses seem to be just that. Something to justify their business model, which was vendor lock-in from day one.
I don't disagree with most of your conclusions, I just think you can still reach those conclusions while assuming his account of it is true to his own perception.
 
Ryu Connor
Gold subscriber
Global Moderator
Posts: 4324
Joined: Thu Dec 27, 2001 7:00 pm
Location: Marietta, GA
Contact:

Re: PhysX Just Went Open Source

Mon Dec 10, 2018 12:07 am

Doom isn't a programmer, much less a low level one, and it strikes me that he lacks the expertise to add an educated opinion.
All of my written content here on TR does not represent or reflect the views of my employer or any reasonable human being. All content and actions are my own.
 
NoOne ButMe
Gerbil Elite
Posts: 707
Joined: Fri May 15, 2015 9:31 pm

Re: PhysX Just Went Open Source

Mon Dec 10, 2018 12:57 am

One of the developers who worked on PhysX during the era it used x87 also has a bone to pick with his code being called unoptimized. His point of view is worth a read.


If this is talking about deep dives such as the one done at RWT... Uh. The big point of that (as I understand it) was that Nvidia was touting all this extra performance when optimized for the GPU compared to the CPU, when much of that extra performance would have been delivered by optimizing on the CPU as well.

But maybe there were other articles discussing it.
 
Topinio
Gerbil Jedi
Posts: 1586
Joined: Mon Jan 12, 2015 9:28 am
Location: London

Re: PhysX Just Went Open Source

Mon Dec 10, 2018 4:46 am

NoOne ButMe wrote:
If this is talking about deep dives such as the one done at RWT... Uh. The big point of that (as I understand it) was that Nvidia was touting all this extra performance when optimized for the GPU compared to the CPU, when much of that extra performance would have been delivered by optimizing on the CPU as well.

But maybe there were other articles discussing it.

It's not as though NVIDIA did nothing to optimize the CPU side. IIRC the first release after the 2008 acquisition was 2.8, and that included some multithreading work (done under Ageia); later updates to the 2.8 series added the CUDA work, 64-bit support, and SSE2.

But, of course NVIDIA was going to put work in on the GPU side, its business model is selling GPUs and middleware.

That blog post doesn't mean that more work on CPU optimisation and SIMD couldn't have been done, but coding isn't easy and available time isn't unconstrained. NovodeX was a university spin-out that only existed from 2002-2004, and it's a little off to say that the original couple of developers on 0.x or 1.x ought to have learned and implemented SSE when doing so would have delayed 1.0 and/or cost features (and suggests that anyone saying so wasn't coding at that time!).

Also, if while at Ageia from 2004-2008 the team had held back feature work and gone all-out to optimise PhysX for CPU, it would probably have made the product less attractive to users, game developers, and NVIDIA.

If NVIDIA had fully optimised it for CPU in 2008-2009, how do you define that please? SSE 4.1, 4.2 and/or 4a? Capable of using 4 threads?

How much would NVIDIA have been slated if version 3.0 had done that, and it worked, and suddenly all the PhysX games ran at good FPS with CUDA PhysX but tanked with an ATI GPU due to CPU load/contention with the 4 PhysX threads on it too?
Desktop: E3-1270 v5, X11SAT-F, 32GB, RX Vega 56, 500GB Crucial P1, 2TB Ultrastar, Xonar DGX, XL2730Z + G2420HDB
HTPC: i5-2500K, DH67GD, 6GB, RX 580, 250GB MX500, 1.5TB Barracuda
Laptop: MacBook6,1
 
NoOne ButMe
Gerbil Elite
Posts: 707
Joined: Fri May 15, 2015 9:31 pm

Re: PhysX Just Went Open Source

Mon Dec 10, 2018 5:41 am

If NVIDIA had fully optimised it for CPU in 2008-2009, how do you define that please? SSE 4.1, 4.2 and/or 4a? Capable of using 4 threads?


No idea, but the x87 used when comparing how much "faster" the GPU was certainly was not optimized.

Without knowing the engineering effort required to optimize for the GPU, it is hard to say anything else.

But generally speaking I would say a CPU optimization which took about as much time as the GPU optimization did would be fair. More than Nvidia did (aka nothing/near nothing), but not a "we optimized every single line of code" scenario (I imagine the GPU optimization was not that scenario either).
 
Topinio
Gerbil Jedi
Posts: 1586
Joined: Mon Jan 12, 2015 9:28 am
Location: London

Re: PhysX Just Went Open Source

Mon Dec 10, 2018 6:55 am

NoOne ButMe wrote:
No idea, but the x87 used when comparing how much "faster" the GPU was certainly was not optimized.

Without knowing the engineering effort required to optimize for the GPU, it is hard to say anything else.

But generally speaking I would say a CPU optimization which took about as much time as the GPU optimization did would be fair. More than Nvidia did (aka nothing/near nothing), but not a "we optimized every single line of code" scenario (I imagine the GPU optimization was not that scenario either).

Eh? NVIDIA (and the lead developer, who co-founded NovodeX, took it to buy-out by Ageia, got Ageia bought-out by NVIDIA for his project, and AFAIK is still there at NVIDIA) clearly did not do nothing to optimise it for CPU. It might have been not enough for the liking of people who don't have a NVIDIA GPU, but by definition those people are the non-customers.

Saying they should have devoted a bigger fraction of their work time to optimisation -- instead of features, bugs, porting to new platforms or additional tools for users -- when one was not there, let alone in management and making decisions, is just :roll:

Saying that the GPU company which bought them out should have directed them to spend a fraction of the time which was allocated for optimising and regression testing of optimisations on working on optimising for other companies CPU products instead of their own GPU product is :roll: :roll: from the business' perspective.
Desktop: E3-1270 v5, X11SAT-F, 32GB, RX Vega 56, 500GB Crucial P1, 2TB Ultrastar, Xonar DGX, XL2730Z + G2420HDB
HTPC: i5-2500K, DH67GD, 6GB, RX 580, 250GB MX500, 1.5TB Barracuda
Laptop: MacBook6,1
 
synthtel2
Gold subscriber
Gerbil Elite
Posts: 858
Joined: Mon Nov 16, 2015 10:30 am

Re: PhysX Just Went Open Source

Mon Dec 10, 2018 4:20 pm

Topinio wrote:
If NVIDIA had fully optimised it for CPU in 2008-2009, how do you define that please? SSE 4.1, 4.2 and/or 4a? Capable of using 4 threads?

SSE3 and beyond are mostly niche, and the horizontal / dot product instructions (handy in theory for vectorizing code not originally designed with vectorization in mind) aren't massively faster than doing the same thing in a scalar manner. 2C2T CPUs were still pretty common at the time Kanter wrote that, and the median new game was already around 2T wide if not more. Nothing else would have represented a boost quite like using SSE 1 or 2.
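To show what those horizontal / dot product instructions actually buy you, a small sketch (assumes SSE4.1 for _mm_dp_ps; this is my illustration, nothing to do with PhysX's internals):

Code:
#include <smmintrin.h>  // SSE4.1

// Scalar dot product: four multiplies and a chain of three adds.
float dot_scalar(const float a[4], const float b[4])
{
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2] + a[3]*b[3];
}

// SSE4.1: one instruction does the multiplies and the horizontal sum,
// but it isn't massively faster than the scalar chain - which is the point.
float dot_sse41(const float a[4], const float b[4])
{
    __m128 va = _mm_loadu_ps(a);
    __m128 vb = _mm_loadu_ps(b);
    // Mask 0xF1: multiply all four lanes, put the sum in lane 0.
    return _mm_cvtss_f32(_mm_dp_ps(va, vb, 0xF1));
}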
 
DragonDaddyBear
Gerbil Elite
Posts: 788
Joined: Fri Jan 30, 2009 8:01 am

Re: PhysX Just Went Open Source

Wed Dec 12, 2018 8:05 am

I just spent like 5 minutes trying to understand this SSE stuff, which got me looking at AVX. I gained a new respect for you true computer programmer types (general app writers are not your equals) and new insight into your world.

I came away thinking that everything in a computer is a mathematical model executed in binary. Would you say it's important to view the code you work with daily as a mathematical model first, or does it really not matter? I ask because I avoided computer science because I hated math (30 minutes of homework in high school pre-calc for just a few problems was depressing). I'm beginning to see the value of higher levels of math for you types.

Not really sure what I'm getting at, other than sharing my respect for those of you who know how to use an instruction, and when, and which version, etc.
 
biffzinker
Gerbil Jedi
Posts: 1991
Joined: Tue Mar 21, 2006 3:53 pm
Location: AK, USA

Re: PhysX Just Went Open Source

Wed Dec 12, 2018 8:59 am

DoomGuy64 wrote:
while the PCIe x1 bus was likely a bottleneck

All cards from ASUS, BFG, and ELSA used a 32-bit PCI slot. Only Dell had its own card with a PCIe x1 slot.
http://physxinfo.com/wiki/Ageia_PhysX_PPU#PhysX_cards
It would take you 2,363 continuous hours or 98 days, 11 hours, and 35 minutes of gameplay to complete your Steam library.
In this time you could travel to Venus one time.
 
synthtel2
Gold subscriber
Gerbil Elite
Posts: 858
Joined: Mon Nov 16, 2015 10:30 am

Re: PhysX Just Went Open Source

Wed Dec 12, 2018 4:31 pm

I don't like theoretical math at all, but advanced math becomes easy when I've got a concrete application for it. I don't think of my programming in math terms, I think of my math in programming terms.
 
dragontamer5788
Gerbil First Class
Posts: 180
Joined: Mon May 06, 2013 8:39 am

Re: PhysX Just Went Open Source

Wed Dec 12, 2018 4:46 pm

DragonDaddyBear wrote:
I just spent like 5 minutes trying to understand this SSE stuff, which got me looking at AVX. I gained a new respect for you true computer programmer types (general app writers are not your equals) and new insight into your world.

I came away thinking that everything in a computer is a mathematical model executed in binary. Would you say it's important to view the code you work with daily as a mathematical model first, or does it really not matter? I ask because I avoided computer science because I hated math (30 minutes of homework in high school pre-calc for just a few problems was depressing). I'm beginning to see the value of higher levels of math for you types.

Not really sure what I'm getting at, other than sharing my respect for those of you who know how to use an instruction, and when, and which version, etc.


Computers are just fancy calculators. SSE just does 4 adds or subtracts at a time, while AVX does 8 adds or subtracts at a time. There's not much "math" involved in programming. The vast majority of it is one of two things: naming stuff, allocating resources, and off-by-one counting errors.

Algorithm design has a chunk of math involved, but algorithm design is relatively niche and not a lot of programmers have any real skill in it. Even then, algorithm design is just counting on steroids: you count the number of instructions, memory accesses, disk accesses, and other major metrics to try to determine how fast the code can go. Then you make a new design and count all of that stuff again. There's a lot of complicated math out there to help you count (ie: Big O analysis, combinations, permutations, statistics), but it's really just counting stuff.

Anyway, code that does fewer things tends to be faster. If one bit of code executes 1 million instructions while a 2nd bit of code executes 10 million instructions, then the 2nd one will probably be slower. (Not necessarily 10x slower, due to modern ILP, clock rate throttling, and other issues, but that's how it goes in broad strokes.)
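To make the widths concrete, a minimal sketch (function names are mine; build with AVX enabled, e.g. gcc -mavx):

Code:
#include <immintrin.h>  // SSE and AVX intrinsics

// SSE: one instruction adds 4 floats.
void add4(const float* a, const float* b, float* out)
{
    _mm_storeu_ps(out, _mm_add_ps(_mm_loadu_ps(a), _mm_loadu_ps(b)));
}

// AVX: one instruction adds 8 floats.
void add8(const float* a, const float* b, float* out)
{
    _mm256_storeu_ps(out, _mm256_add_ps(_mm256_loadu_ps(a), _mm256_loadu_ps(b)));
}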

---------

In any case, the code is like... open source and available: https://github.com/NVIDIAGameWorks/Phys ... c/Simd4f.h

A clear SSE2 version here: https://github.com/NVIDIAGameWorks/Phys ... 2/Simd4f.h

It seems like PhysX uses a lot of Simd4f (128-bit) operations, and that directory contains code for AVX, SSE, and NEON (ARM's SIMD instruction set). Sooooo... yeah. I dunno what people are talking about in this thread: the code is clearly SIMD/SSE based at minimum.
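For anyone who doesn't want to click through, a wrapper like that Simd4f header boils down to something like this - a simplified sketch of the pattern, not the actual PhysX code, which selects an SSE, AVX, or NEON backend per platform:

Code:
#include <xmmintrin.h>  // SSE backend shown; PhysX also ships AVX and NEON ones

// Thin value type over a 128-bit register, so engine code can write
// portable vector math without scattering raw intrinsics everywhere.
struct Simd4f {
    __m128 v;
    Simd4f(__m128 x) : v(x) {}
};

inline Simd4f operator+(Simd4f a, Simd4f b) { return Simd4f(_mm_add_ps(a.v, b.v)); }
inline Simd4f operator*(Simd4f a, Simd4f b) { return Simd4f(_mm_mul_ps(a.v, b.v)); }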
Last edited by dragontamer5788 on Wed Dec 12, 2018 4:57 pm, edited 1 time in total.
 
synthtel2
Gold subscriber
Gerbil Elite
Posts: 858
Joined: Mon Nov 16, 2015 10:30 am

Re: PhysX Just Went Open Source

Wed Dec 12, 2018 4:55 pm

PhysX 3 is SSE-heavy, but PhysX 2 was the contentious one.
 
sweatshopking
Graphmaster Gerbil
Posts: 1375
Joined: Fri Aug 15, 2008 10:37 am

Re: PhysX Just Went Open Source

Wed Dec 12, 2018 5:24 pm

I DON'T KNOW ANYTHING ABOUT CODING, BUT I AM SO MAD ABOUT PHYSX AND BATMAN CAUSE THE FOG RAN BADLY FOR ME
