Looking to build a new computer for Mastercam 2020/Camplete
  1. #41
    Join Date
    Jul 2018
    Country
    UNITED STATES
    State/Province
    California
    Posts
    137
    Post Thanks / Like
    Likes (Given)
    33
    Likes (Received)
    23


    Quote Originally Posted by Mtndew View Post
That hasn't been my experience, to be honest. 2018, 19, 20 and even 2021 (beta) all seem to run the same performance-wise; in fact, I would say that 2020 is better than previous versions when calculating toolpaths.

One thing that some people overlook and that can affect performance is Windows itself. It can and does get cluttered up over time, and if you have the time or patience you can do a Windows refresh or even a full clean install. I've done this a couple of times and the difference is night and day.
I'd be into a fresh install. I don't use my computer for anything other than Mastercam, so I certainly won't be losing anything.

As for performance by year, you might be absolutely right. I know when they switched the interface (2016-2017 maybe) I noticed a huge decrease in performance. Perhaps in the years since, it's just been my system getting slower.

  2. #42
    Join Date
    Apr 2018
    Country
    UNITED KINGDOM
    Posts
    2,546
    Post Thanks / Like
    Likes (Given)
    0
    Likes (Received)
    1202


    Quote Originally Posted by gregormarwick View Post
    Generally when a post contains this much nonsense I only respond to it if there's a chance of any of it getting disseminated as fact. It honestly reads like your knowledge of computers ends abruptly at around the year 2k and everything else is just invention.
Everyone is entitled to their own opinion. I don't drink the Kool-Aid; you like it. That's fine.

But I will mention that HP Enterprise just spent a couple hundred million licensing Silicon Graphics "International" NUMAlink machines so they could sell them into the enterprise HPC market. It is unfortunate their guys did not know that all they really had to do was stick a bunch of Xeons into a box and use these modern APIs. Maybe they could have Siri tell the compiler to optimize their code and voilà! Out the other end, the unicorn!

I also found this sale by "Silicon Graphics International" interesting. SGI went bankrupt in 2006 (and again in 2009). They shit on all the "stakeholders", their suppliers and customers included. Miraculously, ten years later some hidden remnant comes out of the closet with "intellectual property" worth hundreds of millions to sell.

    That's the software industry we all know and love. Crooks.

    Very similar to your claims about APT in the other thread.
APT works. It does five-axis, as demonstrated by many of the airplanes flying our skies - though not the 737 Max. It is public domain. It was the grandparent of NX. One version I have cost $300. I programmed hundreds of parts with it.

Do you have a problem with that?

Let's leave the world of imagination and make this concrete. For one example, I did all the LmaRR Discs (all components and the dies, except I didn't do the spinning; some models as few as four or five parts) for decades in APT. Nothing special, but $300 vs $15,000+ plus maintenance - what's that over ten or fifteen years? Okay, I'm an idiot. I shoulda bought modrun software! It would have been so much better! I coulda saved fifteen minutes!


    Apple, contrary to what you claim, have probably the best implementation of a thread dispatcher in any current mainstream OS in the form of Grand Central Dispatch, which is crazy efficient when software is written to properly utilise it.
    And this is why, when you put a CD with butchered names into the newest latest Apple airweight supercoolifragilistic expialadocious laptop, the arfing desktop locks up for five minutes. In fact, the arfing desktop locks up way too frequently for other undiscovered causes. It's infuriating.

    What you say is/may be true in principle. But on my planet, it has not worked out in practice.

Just out of curiosity, do you ever build software? Without using the "latest greatest" version of gcc and all that crap? My experience has been that 20% of the time it is great: the software is well written and thoroughly tested. Maybe 40 or 50% of the time it is okay, you can make it work. And a full 30 or 40% of the time it is absolute shit.


Regarding Ethernet/NUMAlink - distributed compute does not treat Ethernet as a CPU bus, obviously... Some data to be processed is bundled up in a packet and sent wholesale to the remote computer, where a resident process works on it and sends the results back. This is not a new thing by any stretch, so idk why you think this is weird.
What I thought was weird is that Mastercam, which is NOT that computationally intense, would benefit from this when, as you say, modern multi-core CPUs and that incredibly fast memory system should spit it out wham bam thank-you ma'am in seconds on the local workstation. That was what I thought was weird. Mastercam is not computing weather forecasts... (and even the latest hot-shit European weather-forecasting machine is once again a single system image, not distributed.)
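To make the pattern concrete, here is a minimal sketch of that "bundle a packet, send it to a resident process, get the results back" round trip. This is purely illustrative Python (standard library only); it is not Mastercam's or NUMAlink's actual mechanism, and the squared-sum "work" is a made-up stand-in.

Code:
# Sketch of the work-unit round trip described above. Illustrative
# only -- NOT Mastercam's or NUMAlink's mechanism. Stdlib only.
import pickle
import socketserver
import struct

class WorkerHandler(socketserver.BaseRequestHandler):
    def handle(self):
        # Read a 4-byte length prefix, then the pickled payload.
        size = struct.unpack("!I", self.request.recv(4))[0]
        data = b""
        while len(data) < size:
            data += self.request.recv(size - len(data))
        work_unit = pickle.loads(data)
        # "Process" the work unit (a stand-in computation) and ship
        # the result straight back to the caller.
        result = pickle.dumps(sum(x * x for x in work_unit))
        self.request.sendall(struct.pack("!I", len(result)) + result)

if __name__ == "__main__":
    # The client side connects, sends its packet, and blocks on the reply.
    with socketserver.TCPServer(("0.0.0.0", 9999), WorkerHandler) as server:
        server.serve_forever()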

waterline/Z level also, where each vertical pass can be processed independently and link moves considered later.
This is an interesting idea, and if true, the OP should see that the first level of his CL (cutter location) calculation takes a long time, but all the rest are already done by the time the first one finishes.

Is this what happens?
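For illustration, here is roughly what that independence would look like, sketched in Python - slice_level() is a hypothetical stand-in, not Mastercam's algorithm. If the levels really are independent, they should all finish at about the same time instead of one after another.

Code:
# Hypothetical waterline sketch: each Z level computed independently,
# link moves resolved afterwards. slice_level() is fake work.
from concurrent.futures import ProcessPoolExecutor

def slice_level(z):
    """Stand-in for computing one Z level's cutter location data."""
    moves = sum(i % 7 for i in range(2_000_000))  # fake CPU-bound work
    return z, moves

def waterline(z_top, z_bottom, stepdown):
    levels = []
    z = z_top
    while z >= z_bottom:
        levels.append(round(z, 4))
        z -= stepdown
    # All levels run in parallel across the process pool.
    with ProcessPoolExecutor() as pool:
        return list(pool.map(slice_level, levels))

if __name__ == "__main__":
    for z, moves in waterline(10.0, 0.0, 0.5):
        print(f"z={z}: {moves} moves")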

I generally agree that CAD/CAM developers have been slow to implement real multithreading, but the fact is they all do to some degree nowadays. No reason to cast aspersions towards the OP's observations; what he stated is perfectly possible.
No aspersions were cast. I do not think his test was accurate, though. Most people are not aware that in a multi-tasking OS there are a lot of other things going on that have nothing to do with the program being tested. So, as an extreme example, was Windows doing an update while he looked at the CPU meter? An anti-virus program running a scan? Several of those Windows "services" doing their thing?

It is actually pretty difficult to shut all that down for a real test. Most people don't know that; it's not an "aspersion" to doubt that what he saw is pertinent to the "how many threads does Mastercam use?" question. There are compiler tools that will run just the program and report on what it is doing. That's what one would need to do to get a true answer.

Sorry, Brian T, no aspersions intended. But partially because of the reasons Gregor says I am full of shit, I can't understand why your Mastercam is so slow. It makes no sense. That program ran fine on much older hardware; we've all seen that. That's why I'd be interested to see what you are doing, because people have made stuff in Mastercam for decades now without wondering whether their i7 was actually a 286.

  4. #43
    Join Date
    Jul 2018
    Country
    UNITED STATES
    State/Province
    California
    Posts
    137
    Post Thanks / Like
    Likes (Given)
    33
    Likes (Received)
    23


    Quote Originally Posted by EmanuelGoldstein View Post

It is actually pretty difficult to shut all that down for a real test. Most people don't know that; it's not an "aspersion" to doubt that what he saw is pertinent to the "how many threads does Mastercam use?" question. There are compiler tools that will run just the program and report on what it is doing. That's what one would need to do to get a true answer.

Sorry, Brian T, no aspersions intended. But partially because of the reasons Gregor says I am full of shit, I can't understand why your Mastercam is so slow. It makes no sense. That program ran fine on much older hardware; we've all seen that. That's why I'd be interested to see what you are doing, because people have made stuff in Mastercam for decades now without wondering whether their i7 was actually a 286.
No problem at all! I've said all along (and I think this thread has proven my point) that I don't actually know what I'm talking about. Furthermore, I actually haven't been able to replicate my original test, which is why I haven't posted a screenshot. It looks like what actually happens is they all spike for a second, then all but two drop down. Perhaps I regenerated a rest-milling op or something that works with other toolpaths in the background.

Also, perhaps my computer isn't as slow as I think; still, it seems to me I should be able to throw some money at it to speed it up.

  5. #44
    Join Date
    Feb 2007
    Location
    Aberdeen, UK
    Posts
    3,614
    Post Thanks / Like
    Likes (Given)
    1250
    Likes (Received)
    1382


    Quote Originally Posted by EmanuelGoldstein View Post
Everyone is entitled to their own opinion. I don't drink the Kool-Aid; you like it. That's fine.
I really don't want to get drawn into an internet pissing match, not least because we're going wildly off-topic and arguing about things that nobody else ITT cares about...



    Quote Originally Posted by EmanuelGoldstein View Post
...NUMAlink...
You brought it up for some reason when we were talking about distributed compute. NUMAlink is what it is - HP need it for their mainframes. So what? It has nothing whatsoever to do with anything that we're discussing...

    Quote Originally Posted by EmanuelGoldstein View Post
    ...APT...
    APT is a different argument. I used it to highlight your propensity for making false equivalence arguments.

    Quote Originally Posted by EmanuelGoldstein View Post
    ...newest latest Apple airweight supercoolifragilistic expialadocious laptop...
    I use OSX for about half of everything that I use a computer for. It's rock solid in my experience.

    Quote Originally Posted by EmanuelGoldstein View Post
Just out of curiosity, do you ever build software? Without using the "latest greatest" version of gcc and all that crap? My experience has been that 20% of the time it is great: the software is well written and thoroughly tested. Maybe 40 or 50% of the time it is okay, you can make it work. And a full 30 or 40% of the time it is absolute shit.
    Actually, yes to the first question. To the second question, kind of - I started out programming when I was a teenager in the 90s, so I have experience with older tools and methods and different platforms, but I don't maintain old code or anything like that - only work with current toolsets.

We can definitely agree that these days the general quality of commercial software is pretty bad. Wages in that space are often very low, with a lot of inexperienced and poorly educated people writing code and making design decisions. And everyone is moving to a rolling-release model so they don't have to spend money on QA - they just let the early adopters take the hit coughmicrosoftcough.

    Quote Originally Posted by EmanuelGoldstein View Post
What I thought was weird is that Mastercam, which is NOT that computationally intense, would benefit from this when, as you say, modern multi-core CPUs and that incredibly fast memory system should spit it out wham bam thank-you ma'am in seconds on the local workstation. That was what I thought was weird. Mastercam is not computing weather forecasts
Yes, it should be faster. Given the raw power of modern CPUs, everything could be much faster. But you'll go a long way these days to find someone who can rework a function in assembly. Optimising takes time; low-level programming takes time. Development is much faster, relatively, because of high-level languages and abstracted APIs built layer upon layer on top of each other, but it's computationally expensive.

On top of that, it's important to understand how much work a modern CAM system is actually doing compared to what they used to do.

    Collision detection and avoidance, continuous tool vector optimisation, dynamic tool load and engagement normalisation, dynamic path filtering, machine dynamics optimisation etc. etc.

    Quote Originally Posted by EmanuelGoldstein View Post
No aspersions were cast. I do not think his test was accurate, though. Most people are not aware that in a multi-tasking OS there are a lot of other things going on that have nothing to do with the program being tested. So, as an extreme example, was Windows doing an update while he looked at the CPU meter? An anti-virus program running a scan? Several of those Windows "services" doing their thing?
Typically on something like an i7 on Windows, background processes will use negligible CPU time. A virus scan, for example, will always be I/O-bottlenecked and might put 20% load on one core. On such a typical system, if all the cores are pegged it is, 999 times out of 1000, because of what you are doing in the foreground.

If the OP witnessed all eight threads fully loaded while doing something in Mastercam, then unless he was raytracing something or rendering a video for YouTube in the background, it was Mastercam.

    Quote Originally Posted by EmanuelGoldstein View Post
    There are compiler tools that will run just the program and report on what it is doing. That's what one would need to do to get a true answer.
    You're talking about a profiler, and generally they're not useful outside of the development environment because they inject hooks into the binary at compile time in order to work. On windows, a typical profiler will report the number of threads, but will not tell you anything about their core affinity, as that is determined dynamically by the scheduler.

Perfmon.exe is the simplest way to get hard data on this, as it will let you log the CPU time used by a specific process as a percentage of total CPU time.
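If you would rather script it, a few lines of Python with the third-party psutil package will log roughly the same thing - one process's CPU usage plus its thread count. "Mastercam.exe" below is an assumed image name; substitute whatever Task Manager actually shows.

Code:
# Scripted alternative to Perfmon using the third-party psutil
# package (pip install psutil). Logs one process's CPU usage and
# thread count once per second. "Mastercam.exe" is an assumption.
import time
import psutil

TARGET = "Mastercam.exe"  # assumed image name; check Task Manager

proc = next(p for p in psutil.process_iter(["name"])
            if p.info["name"] == TARGET)
ncores = psutil.cpu_count(logical=True)

proc.cpu_percent(None)  # prime the counter; the first call returns 0.0
while True:
    time.sleep(1.0)
    pct = proc.cpu_percent(None)  # % of one core; >100 means several cores
    print(f"{pct:6.1f}% of one core ({pct / ncores:5.1f}% of total), "
          f"{proc.num_threads()} threads")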

  6. #45
    Join Date
    Apr 2018
    Country
    UNITED KINGDOM
    Posts
    2,546
    Post Thanks / Like
    Likes (Given)
    0
    Likes (Received)
    1202


    Quote Originally Posted by gregormarwick View Post
    We can definitely agree that these days the general quality of commercial software is pretty bad.
For two people that agree, we sure managed to turn this into an argument.

I do agree with you that hardware is much better. I just don't think that commercial software has improved over the last ten or fifteen years. A bugfix here and a bugfix there, but real improvements? I bet in a blind test people couldn't tell the difference between XYZApp 2006 and XYZApp 2019. Or maybe they would prefer the old one!

  8. #46
    Join Date
    Jun 2013
    Country
    UNITED STATES
    State/Province
    Washington
    Posts
    85
    Post Thanks / Like
    Likes (Given)
    2
    Likes (Received)
    15


The configuration you have is not exactly wimpy! Simply increasing the number of cores and the clock speed is unlikely to get you anywhere close to an order-of-magnitude jump in performance. I suspect it comes down to how well Mastercam load-balances across multiple cores.
Most modern CPUs are already 64-bit, but that doesn't buy you anything unless your program is compiled for 64-bit.

    Some serious conversation with Mastercam might help but I wouldn't hold my breath. Good luck.

  9. #47
    Join Date
    Sep 2013
    Country
    UNITED STATES
    State/Province
    Minnesota
    Posts
    298
    Post Thanks / Like
    Likes (Given)
    96
    Likes (Received)
    64


    A bit late to the party...

Not too sure about MC, but a lot of CAD/CAM/CAE software is floating-point intensive. Old AMD CPUs were really bad at this because they shared one floating-point unit between two integer cores, but I have no idea what Ryzens are doing. It seems like they keep this info somewhat hard for the user to find.

The big advantages of Xeon are multi-socket support and ECC RAM support. Not sure if Ryzen supports those...

Windows Home used to not support multiple sockets, though 10 Home may be different, and I think MS opened up Pro as well - it used to support only two sockets but may allow more now. Make sure hyperthreading is enabled for MC, but check your other applications too; it's very application-specific.

There seems to be a lot of discussion surrounding core speed versus number of cores and single-threaded versus multi-threaded work. There is still a lot of both threading types in various CAM software. You have to decide for yourself on a compromise between 4 fast cores, 6 slightly slower cores, or 8 (and more) cores that are slower still. There are some Xeons with a high core count (48 cores?) that are relatively fast, but the price is high as well.

For a starting point, it's a good idea to monitor your CPU load for all your common tasks to see which ones are single-threaded and which are multi-threaded (see the sketch below). If you go for a 4-core CPU, then your multi-threaded performance will suffer. My ideal setup would be a fast 4-core and then a decent 12-core on a second socket, but we all know that isn't supported. My main workstation is a decently fast single 6-core, but a build I am going to do for contract work will be either two of these or two fast 4-core CPUs; still haven't decided. Currently I see NX crunching hard on some single-threaded processes, but there are many multi-threaded ones too - I watch all six cores crunch for a while, such as in a simulation with the resolution cranked up a bit.
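Here is a rough sketch of that monitoring in Python, using the third-party psutil package: sample per-core load while a common task runs and see whether one core is pegged (single-threaded) or all of them are (multi-threaded). Illustrative only, not tied to any particular CAM system.

Code:
# Sample per-core load while you run a task. One pegged core suggests
# a single-threaded operation; all cores busy suggests multi-threaded.
# Uses the third-party psutil package (pip install psutil).
import psutil

print("Start your task now; Ctrl-C to stop sampling.")
try:
    while True:
        # Blocks for one second, then reports each logical core's busy %.
        per_core = psutil.cpu_percent(interval=1.0, percpu=True)
        row = " ".join(f"{pct:5.1f}" for pct in per_core)
        pegged = sum(1 for pct in per_core if pct > 75.0)
        print(f"[{row}]  cores over 75%: {pegged}")
except KeyboardInterrupt:
    pass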

As for SSDs, I would go with a decent M.2 setup. Definitely skip SATA3 because it's old-school and slow, and I would skip PCIe-slot SSDs unless you find one at a decent price point. You could do RAID, but I wouldn't bother... your big concern should be the CPU. You probably have enough RAM, although check your usage. 64GB is the new 32, lol. Your GPU performance will mainly matter for redraws, rotations and perhaps high-end renderings if that's your thing. If you find your graphics is really lacking, then perhaps look at a faster card, and if you overwhelm a single card then you might consider running multiple cards with SLI.

Too bad CAM software isn't more like grid computing. On a test rig I had three GTX 1080s crunching grid work units for protein folding and it totally rocked, back when those were top-end cards. The neighbor kid thought it was a huge waste because I didn't game with it, LOL.

  10. #48
    Join Date
    May 2017
    Country
    UNITED STATES
    State/Province
    Minnesota
    Posts
    1,086
    Post Thanks / Like
    Likes (Given)
    1338
    Likes (Received)
    730


    Quote Originally Posted by Qwan View Post
For a starting point, it's a good idea to monitor your CPU load for all your common tasks to see which ones are single-threaded and which are multi-threaded.
Yes, but also pay attention to which tasks you end up waiting for the most in your daily workflow. If you have a lot of multi-threaded tasks that you wait a couple of seconds for, and a few single-threaded tasks that you wait minutes for, you're going to save a lot more time overall by speeding up those single-threaded tasks, even if it makes you wait a little longer for the fast ones. Also pay attention to how many threads a multi-threaded task is capable of using. It may be capable of using four threads but no more, in which case a 16-thread CPU is still wasting most of its capacity.
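To put toy numbers on that trade-off (invented for illustration, not benchmarks): a workflow with many short multi-threaded waits and a few long single-threaded ones comes out ahead on the faster-core chip even when the multi-threaded tasks get slower.

Code:
# Toy arithmetic for the trade-off above -- the numbers are invented.
# Workflow: many short multi-threaded waits plus a few long
# single-threaded waits per day, all in seconds.
def total_wait(multi_wait, multi_count, single_wait, single_count):
    return multi_wait * multi_count + single_wait * single_count

# Option A: high core count, lower clocks (fast multi, slow single).
a = total_wait(multi_wait=2.0, multi_count=30, single_wait=180.0, single_count=5)
# Option B: fewer, faster cores (multi waits up 50%, single waits down 25%).
b = total_wait(multi_wait=3.0, multi_count=30, single_wait=135.0, single_count=5)

print(f"A: {a / 60:.1f} min/day waiting, B: {b / 60:.1f} min/day waiting")
# A: 16.0 min/day waiting, B: 12.8 min/day waiting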

