Minimalist Semantic Markup


Show Posts

This section allows you to view all posts made by this member. Note that you can only see posts made in areas you currently have access to.

Topics - Jason Knight

Pages: [1] 2 3
PC Hardware / Ryzen 5 3600 vs. 3600x, my conclusions
« on: 21 Feb 2020, 10:28:48 pm »
Yesterday I got to have the "joy" of building two Ryzen systems side by side, one each of the 3600 and 3600x. Because I had both on hand I decided to play with them to ... how to put this politely... confirm some suspicions I had.

I tested both at the 3600's voltages and timings, and at the 3600x's voltages and timings, with each of the coolers. Prime95 was set for maximum heat. This was all done on a Gigabyte X570 Gaming X motherboard with Thermal Grizzly Kryonaut paste. Boost was set to all cores capable.

The tested coolers are the Wraith Stealth that comes with the 3600, the Wraith Spire which comes with the 3600x, and my spanky new Noctua NH-D15 Chromax that's going into my new media center upgrade next month. I tested underclocks and overclocks on the Noctua only.

Side note, it's really strange looking at a Noctua sink and fans that are black instead of manure brown and beige.

My results; temperatures are the reported high AFTER throttling stabilized (earlier values discarded).

Code: [Select]
Wraith Stealth
3600 timings/voltages
3600 -- 94C, throttled to 4150
3600x -- 96C, throttled to 3867
3600x timings/voltages
3600 -- 95C, throttled to 4150
3600x -- 96C, throttled to 4100

Wraith Spire
3600 timings/voltages
3600 -- 94C, 4200 no throttling
3600x -- 96C, throttled to 4050
3600x timings/voltages
3600 -- 94C, throttled to 4200
3600x -- 95C, throttled to 3950 ???

Noctua NH-D15 Chromax
3600 timings/voltages
3600 -- 65C, 4200 no throttling
3600x -- 68C, 4200 no throttling
3600x timings/voltages
3600 -- 72C, 4400 no throttling
3600x -- 76C, 4400 no throttling
3600x timings at 3600 voltages
3600 -- 68C, 4400 no throttling
3600x -- unstable, unable to run test
4.6ghz max boost at 1.5v
3600 -- 78C, 4600 no throttling
3600x -- 94C, throttled to 4233

Conclusions? Well, it's clear to me that they're really the same part at the core, and the "real" difference between them is the cooler each comes with. A 50MHz spread is within the acceptable tolerances of most binning processes, and of course ALL the Zen 2 chips use the same chiplets...

But the higher speeds and voltages bring about an interesting possibility. You know how when overclocking to get stability you often just strap more cooling to it and up the voltage? This is because lower binned parts can often stabilize with more voltage...

LOWER binned parts... Given that in terms of TDP the 3600x is a 95 watt part, and the 3600 is a 65 watt part, is it possible that the 3600x is in fact the LOWER binning?

From the day they were released I thought it was ridiculous that the 3600x was $50 more for a 200MHz boost limit increase whilst sucking nearly a third more juice / making a third more heat. That felt like it was just a really crappy and inept overclock. I mean seriously, a 25% price increase for a less than 5% clock speed increase whilst having a 45% increase in TDP? Something is WRONG there.

Well, what if the "x" is just a crappy overclock of a lower binned part than its 3600 cousin? That I got the 3600 to 4.6GHz with no throttling whilst the 3600x crapped out and started spewing heat like crazy could be an indication of this. They may have found a way to sell the inferior part for more money! Just put a bigger sink on it and throw more electricity in there.

Though my sample pool is too small to say for sure. I could have just "gotten lucky" on which chiplets I had in front of me.  This is NOT proof by any measure and is all just conjecture, but it makes sense.

I can see why most of the trustworthy Youtubers (JayzTwoCents, Linus, Steve from Gamers Nexus) are saying don't waste the extra money on the 3600x.

Side note, it's also comedy when I trust people on Youtube more than I do "real" writers from "real" publishers. When it comes to 'tech? Damned straight skippy!

Snippet Sharing / PHP - pagination
« on: 10 Feb 2020, 09:37:16 am »
Just thought I'd share my pagination routine as on several forums I've seen people struggling with this.

Code: [Select]
function template_pagination($urlRoot, $current, $perPage, $totalArticles) {

	if ($totalArticles > $perPage) { // don't bother if only one page

		echo '
			<ul class="pagination">';

		if (($currentPage = floor($current / $perPage)) > 0) echo '
				<li><a href="', $urlRoot, '0">First</a></li>
				<li><a href="', $urlRoot, $currentPage - 1, '" title="previous" rel="previous">&laquo;</a></li>';

		if (($lastPage = floor(($totalArticles - 1) / $perPage)) > 9) {
			// more than ten pages, show a sliding window around the current page
			$counter = ($currentPage < 6) ? 0 : $currentPage - 5;
			if (($endPage = $counter + 10) > $lastPage) {
				$endPage = $lastPage;
				$counter = $endPage - 10;
			}
		} else {
			// ten pages or fewer, just show them all
			$counter = 0;
			$endPage = $lastPage;
		}

		while ($counter <= $endPage) echo '
				<li>', (
					($noAnchor = ($counter == $currentPage)) ?
					'<span>' :
					'<a href="' . $urlRoot . $counter . '">'
				), ++$counter, (
					$noAnchor ?
					'</span>' :
					'</a>'
				), '</li>';

		if ($currentPage < $lastPage) echo '
				<li><a href="', $urlRoot, ++$currentPage, '" title="next" rel="next">&raquo;</a></li>
				<li><a href="', $urlRoot, $lastPage, '">Last</a></li>';

		echo '
			<!-- .pagination --></ul>';

	}

} // template_pagination

It shows a first link, previous link, up to five pages before the current page, the current page, up to five after it, then next and last links. The first/previous links are omitted on the first page; the next/last links are similarly omitted on the last page. If you're within five pages of the end, more pages are shown before the current one to keep the window full, and vice versa within the first five pages.

... and more than enough hooks are present to style it however you like. I suggest setting the LI to display:inline, and the anchors and span to inline-block so you can pad the top and bottom of them; alignment is then a simple matter of text-align.

Example use:

Code: [Select]
template_pagination('blogs/list?page=', 3, 10, 63);
Where you want the href to read something like "blogs/list?page=4" for the next page, you're currently on page 3, you want ten articles/sections per page, and there are 63 articles in total on the site. It handles all the rest of the math for you.
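For anyone trying to follow the windowing math, here's just that part pulled out on its own -- a JavaScript sketch purely for illustration, the function and property names being mine:

```javascript
// Sketch of the page-window calculation: given the current page and the
// last page (both 0-indexed), return the first and last page numbers to
// display, keeping up to eleven links in the window.
function pageWindow(currentPage, lastPage) {
	// ten pages or fewer? just show them all
	if (lastPage <= 9) return { start : 0, end : lastPage };
	// otherwise slide a window of eleven around the current page
	var start = (currentPage < 6) ? 0 : currentPage - 5;
	var end = start + 10;
	if (end > lastPage) {
		end = lastPage;
		start = end - 10;
	}
	return { start : start, end : end };
}
```

So on page 20 of 41 you'd get pages 15 through 25, whilst on page 38 of 41 the window shifts back to 30 through 40 instead of running off the end.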

The Zoo / Iowa caucuses "mobile app failure"
« on: 4 Feb 2020, 03:29:37 am »
Just which incompetent halfwitted derp thought that using MOBILE CRAPPLETS to report voting results was a good idea? Security? Reliability? Who cares about that, so long as some little dipshit (probably the 15 year old nephew of someone in authority with the Iowa Democratic Party) unqualified to write "hello world" in C can sleaze out something in BuildFire or Appery?

Seriously, all that bitching about frameworks, WYSIWYG's, and "App Builders" I'm always on about?

Well here you go. Perfect example of why mobile crapplets are dipshit bullshit that should NEVER be used for any serious task.

Land's sake, putting voting results out over MOBILE as the reporting method? Whiskey Tango Foxtrot!

Sad part is, I've got this nagging feeling in the back of my head that the entire "delay" is the result of Bernie winning and the party trying to come up with excuses to discount certain votes so they can hand it over to creepy Joe.

JavaScript / To extend system objects or not?
« on: 2 Feb 2020, 10:29:17 am »
Years ago I heard that extending system objects like "Element" or "Math" was "bad". Whilst I knew that legacy IE (of which I'm out of ***'s to give, my ***'s have run all dry) didn't do proper inheritance on live DOM Elements, in modern browsers what reasons are there for extending Array, String, Element, Object, Node, etc., being a "bad thing" you "shouldn't do"?

The only one I know of is namespace collisions, and even in that case you could simply use uniform prefixes -- like a double underscore -- to mark them as "proprietary".

For example, what's wrong with:

Code: [Select]
Node.prototype.__flush = function() {
	while (this.firstChild) this.removeChild(this.firstChild);
};

What would make that a "forbidden" technique, just because it is being applied to Node?


Code: [Select]
get : function() {
	return this.replace(/[&<>"']/g, function(m) {
		return '&#' + m.charCodeAt() + ';';
	});
}

The reason I'm asking is my original version of elementals was going to work this way, and I'm thinking on dragging Elementals 4 back to this, since IMHO:

Code: [Select]
var myString = 'This is a <test>';

Would be vastly superior to:

Code: [Select]
var myString = 'This is a <test>';
// where _ is my elementals core object

Probably faster executing too. I know I was told legitimate reasons not to use this approach a decade ago, but damned if I can remember what they were. Was it literally just "IE7/earlier can't do that" and that's it?!?

Trying to challenge my own assumptions here; assumptions that may have been based on misinformation, misunderstanding, or that flat out simply no longer apply years later.

Whilst for the most part one should always try to make downloads be a server-handled thing, sometimes it's faster/easier to generate data in JavaScript... but how do you take a JavaScript variable and make a download out of it without spitting it back to the server?

Well, here's how:

Code: [Select]
function fileDownload(filename, content, mimeType) {
	var a = document.createElement('a'); = filename;
	a.href = 'data:' + (
		mimeType || 'text/plain'
	) + ';charset=utf-8,' + encodeURIComponent(content); = 'none';
} // fileDownload

We make an anchor element, set the download attribute to our desired filename, then in the href we set data:text/plain (or whatever mime-type you want) with the charset, followed by a comma, then a URI encoded version of our data. This anchor, when clicked, will magically turn that data: URI into a download with the download attribute as the filename.

From there we just make sure it's display:none, append it to the body, trigger its click event, then remove/delete it.


Code: [Select]
fileDownload('test.txt', 'This is a test');

Will trigger a browser download of "test.txt" containing "This is a test". Easy-peasy lemon squeezy.
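The href construction is the only part of this you can sanity-check outside a browser; here it is pulled out on its own as a sketch, "makeDataURI" being a made-up name purely for illustration:

```javascript
// Just the data: URI construction step from the routine above, standalone.
// Falls back to text/plain when no mime-type is given, exactly as described.
function makeDataURI(content, mimeType) {
	return 'data:' + (mimeType || 'text/plain') +
		';charset=utf-8,' + encodeURIComponent(content);
}
```

So makeDataURI('This is a test') gives you 'data:text/plain;charset=utf-8,This%20is%20a%20test', which is what lands in the anchor's href.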

General Discussion / ... and the crApple stupidity continues
« on: 11 Dec 2019, 01:17:43 pm »

Audio was one of the few places I ever defended Apple, because of the out-of-box ASIO support and quality audio chips... the latter went away about a decade ago as they derped their way into cutting EVERY possible technical corner; which is why in terms of "support components" those multi-thousand dollar Macs are no better than a $300 Black Friday Walmart special...

But screwing over all those reasons professional audio workers chose Apple just wasn't enough for them. No, they had to implement their derpy broken "security chip" BS that breaks external USB audio devices -- devices some professionals wouldn't even have resorted to if the built-in audio wasn't cheap-ass Crystal or Realtek now... aka no better than your average $30 bargain-basement no-name Chinese motherboard.

Congratulations crApple, you've just made hackintoshes have better professional audio than your own machines on the SAME external USB devices.

Hence, the only thing "professional grade" about Macs anymore are the halfwits, morons, and fools who still cling to the notion there's ANYTHING superior about these steaming piles of overpriced junk.

WHY are people dumb enough to continue buying from this damned company of two-faced sleazy dishonest dirtbags with their anti-consumer predatory practices?!?

Site Support / Apple mobile stupidity MAYBE fixed?
« on: 9 Dec 2019, 03:42:39 pm »
For some reason Apple products -- and ONLY Apple products -- have been choking on the 301 redirects that force http to https, which is annoying since if you just type in the domain name they assume it's http and don't even try the alternative.

Made worse by the fact their error message says it might have permanently moved -- which is exactly what the 301 is trying to flipping tell it.

They seem to send a mangled request of some sort I'm still trying to sort out the how/why of -- it's like mangled HTTP 2 headers under a 1.1 declaration -- but for now I THINK I may have thrown enough changes at how they work for them to stop being so stupid about it.

It works in the emulation under Xcode; can anyone with a real device that was having problems confirm?

General Discussion / Favorite OS "nobody" uses?
« on: 7 Dec 2019, 10:52:07 pm »
I've been a bit of an OS whore over the years -- be it every flavor of Linux distro (where they all suck equally as desktop OSes), attempts at cloning Windows like ReactOS, or other flavors of Unix and *nix-likes such as Darwin, FreeBSD, QNX, etc.

But I'm more interested in the real "fringe" OS that few people have heard of and fewer still ever used.

In my case the favorite is BeOS, and by extension Haiku. If Haiku were more mature, had a better choice of browsers, and had a better selection of applications, it would be my go-to. It feels so snappy and responsive... even the original BeOS was ridiculously well performing. Twenty years ago it could make a K6-2/450 feel like bleeding edge hardware of today in how the UI performed and responded to input, and in how well its pervasive multi-threading forced "give the user something" to the foreground. It was ridiculous how it could take 1990's hardware and handle things like multiple DVD-quality video streams playing side-by-side on hardware where just one video would bring the machine to its knees in Windows or Linux... much less the realtime audio capabilities that made a BeBox as iconic a part of many recording studios as the Atari ST.

Sadly, I think said "pervasiveness" of multithreading -- where EVERY program requires at LEAST two threads (input and display) -- is why there's so little software written for it. It takes an entirely different mindset and architectural approach to write even a "hello world" for it, in a way that's radically different from nearly every other GUI-based OS.

But anyone have anything they love that's even more obscure?

General Discussion / Yay, first forum DDOS
« on: 5 Dec 2019, 12:12:03 am »
That was fun. Thankfully it was all from two different data centers spamming joins and then spamming the re-send e-mail.

Since there's no reason for data centers to be sending user connections, iptables is your friend.

Sorry for the brief downtime, they struck whilst grandpa here was napping.

I'm once again working on doing 360 degree projections in a 6:1 aspect across the top of the screen. The best way seems to be arctangent, but that means I can't use any of the native perspective calculations.

This means none of my vector calculations can be done on the GPU in an efficient manner, leaving the game in question painfully tied to the CPU. I mean, I could try leveraging shaders, but sending the math to the GPU and then back to the CPU just to send it to the GPU again seems... bad.

Is there any way anyone's aware of to simply tell either OpenGL or Vulkan to widen the FOV to a 360x60 perspective, or is the Z divide simply so ingrained in the perspective map that it just can't do it?

Side note, I hate matrix math. How anyone can think that 64 multiplies of 32 values can be more efficient than two multiplies, two divides, and two add/subtracts of six values utterly escapes me... much less everyone feeling it was so much better a way that they ended up throwing silicon at it.

But again, I do my 3d math using atan2 so...
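For clarity, here's roughly what I mean by doing the projection with atan2 -- a JavaScript sketch, where the function and parameter names are mine and a 360 degree wide by 60 degree tall strip is assumed:

```javascript
// Illustrative sketch: map a world-space point (x, y, z), already relative
// to the camera, onto a 360-degree-wide, 60-degree-tall cylindrical strip.
// screenW / screenH are the pixel dimensions of the 6:1 strip (my names).
function cylindricalProject(x, y, z, screenW, screenH) {
	var yaw = Math.atan2(x, z);           // -PI..PI around the viewer
	var dist = Math.sqrt(x * x + z * z);  // distance in the ground plane
	var pitch = Math.atan2(y, dist);      // elevation angle above/below level
	var vFov = Math.PI / 3;               // 60 degrees of vertical FOV
	return {
		x : (yaw / Math.PI + 1) * 0.5 * screenW, // wrap the full 360 across the width
		y : (0.5 - pitch / vFov) * screenH       // center the 60 degree band vertically
	};
}
```

A point dead ahead lands in the middle of the strip, and nothing here ever touches a perspective matrix or a Z divide -- which is exactly why the fixed-function pipeline can't be talked into it.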

JavaScript / What do you think's missing from JS' library?
« on: 9 Nov 2019, 08:05:39 am »
Elementals.js 4.0 is about ready for launch -- what's really holding it up from hitting RC status is that the documentation needs a complete rewrite, as 4.0 is radically different from 3.x. If anything it's closer to how 1.0 was supposed to work, as I've dialed back the clock a bit whilst also dropping IE9/earlier support.

But before I sign off on it as feature-complete, is there anything people use in JavaScript for wrapping functions or other helpers that you would consider essential?

elementals.js is more about polyfills, a make helper, views, and an ajax wrapper... but I'm looking for small useful bits of functionality that just plain feel missing from JavaScript.

Any ideas? Mind you, I'm using defineProperty a lot so actual DEFINE as well as calculated properties are possible.

Just looking for anything I don't normally do that might be useful. I know that's a bit vague, but I'm looking for suggestions no matter how wild or mundane.
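As an example of the sort of calculated property I mean -- a minimal sketch, where "__reversed" is a name made up purely for illustration:

```javascript
// Illustrative only: defineProperty lets you hang a calculated (getter)
// property on a system prototype without it showing up in enumeration.
Object.defineProperty(String.prototype, '__reversed', {
	get : function() {
		return this.split('').reverse().join('');
	}
});
```

With that in place, 'abc'.__reversed just works like a property, no call parentheses needed, and for..in over a string's keys stays clean since enumerable defaults to false.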

JavaScript / When NOT to make a wrapping function.
« on: 8 Nov 2019, 04:39:21 pm »
Talking about frameworks -- and even just simple helper functions -- with folks the past month has made me realize a common mistake that I've made myself.

Creating functions to wrap around other functions to save time.

@jmerker did this with his little "doc" function to wrap document.getElementById. I used to make all sorts of little functions like that myself...

But really is doc('test') actually CLEARER or easier to understand? What about the overhead of the function call slowing down an already slow function?

In terms of overhead, how people are doing this with arrow functions just makes it worse. People use arrow functions as shortcuts for things that weren't enough characters to complain about in the first place. This "wah wah, I don't wanna type" attitude -- to the point of making newer code painful to even make sense of, whilst adding massive overhead -- is just the worst bad practice of the past three decades put on a pedestal as the latest hotness.

Nowhere is the flaw in this type of thinking more evident than with Array.forEach and other iteration routines that rely on callbacks.

See the example from MDN's site:

Code: [Select]
const items = ['item1', 'item2', 'item3'];
const copy = [];

// before
for (let i = 0; i < items.length; i++) {
	copy.push(items[i]);
}

// after
items.forEach(function(item) {
	copy.push(item);
});
What's the justification for this change to a technique that in production code seems to introduce 8x or more slowdown because of both callbacks and sloppier iteration methods? How is this an ACTUAL improvement? The "before" is clearer because you can see the logic, and faster executing because we don't have the overhead of a function call involved.

The arrow function version of that:

Code: [Select]
items.forEach(item => {
	copy.push(item);
});

Is even more cryptic on top of the speed penalty.

But those are nothing compared to "one liners" where people make functions to replace existing... functions... the what now?

I think every beginner does this at the start -- I certainly did -- but when you get down to things like:

Code: [Select]
function id(target) { return document.getElementById(target); }

Are you really making things any better, with the extra function overhead penalty, JUST because you don't want to type document.getElementById? It's a bit hurr-durrz to me, and I think it stems from the difference between minimalism and byte obsession.

Minimalism is about making things as clean and simple as possible, and sometimes -- just sometimes -- that even means LARGER code as code clarity is as important as the byte count. Byte obsession on the other hand is about trying to minimize the amount of code typed in or delivered, even at the cost of code clarity, maintainability, sustainability, or ease of use.

Byte obsession is often knee-deep in "false simplicity". It might LOOK simpler because there's less of it, but the end result dips below the actual complexity needed to do the task efficiently or easily, making it harder to actually do no matter how simple it looks.

EVERY time you call a function you introduce overhead. The more local vars that function has, the more scope blocks it has (such as let/const), the more parameters you pass to it, etc., the bigger that overhead. In execution time, in memory allocation, in garbage collection, so on and so forth.

Hence why it's painful to see such blatantly inefficient code like this:

Code: [Select]
function Counter() {
  this.sum = 0;
  this.count = 0;
}

Counter.prototype.add = function(array) {
  array.forEach(function(entry) {
    this.sum += entry;
    ++this.count;
  }, this);
  // ^---- Note
};

const obj = new Counter();
obj.add([2, 5, 9]);
obj.count; // 3
obj.sum;   // 16

Being promoted as a good practice, when this:

Code: [Select]
function Counter() {
  this.sum = 0;
  this.count = 0;
}

Counter.prototype.add = function(array) {
  for (var i = 0, iLen = array.length; i < iLen; i++) this.sum += array[i];
  this.count += array.length;
};

const obj = new Counter();
obj.add([2, 5, 9]);
obj.count; // 3
obj.sum;   // 16

Is simpler, clearer, and executes many MANY times faster through the removal of the callback functions... much less the derpitude of an anonymous function that's called more than once and the scope juggling of "this". Excuse my moving the obvious out of the loop, but that's some serious hurr-durrz logic!

This sudden obsession with making every single stupid line contained in its own function is just so moronic in 90%+ of the usage cases.

I can see beginners making these types of stupid choices -- hell, I did it myself when I first came into JavaScript from C++ and Ada two decades ago -- but when such inefficiencies and bad choices  start seeping into being part of the programming language itself, I start to worry.

JavaScript / Is Array.each a bad idea
« on: 4 Nov 2019, 04:06:12 pm »
I had someone do the blanket "It's horrible, never use it". For those of you who think I sound that way, remember I typically say "99% of the time" for anything that's language-construct level, with the possible exception of the <style> tag.

I wasn't convinced it's horrible, but I can see how it introduces inefficiencies. There are times where I could see doing

Code: [Select]
myArray.each(function(e) { /* do something with e */ });

Could result in cleaner/simpler code, or reduce the total code if the function being passed is used in multiple different spots over and over... but at the same time most of the EXAMPLES of its use are inherently junk.

As a one-off function, anonymous function, or the dreadfully and painfully cryptic arrow function trash (no offense folks, but damn), the introduction of a function call -- with the push of the current execution point, the push of any parameters, the allocation of locals, then the reverse after execution of de-allocation and releasing the stack -- is a massive amount of wasteful overhead.

In assembler this is why we use macros that are inserted by the compiler; the resultant code is larger than if we used CALL/RET, but it executes many, many times faster.

To that end let's say you had gotten all your menu single-depth LI via querySelectorAll.

Code: [Select]
var menuLi = document.querySelectorAll('#mainMenu li');

... and let's say we want to do something simple like add a raquo to each one. The fastest traditional approach:

Code: [Select]
for (var i = 0, li; li = menuLi[i]; i++) {
	li.appendChild(document.createTextNode(' »'));
}

Is small, simple, easy enough. Using "each" and arrow functions:

Code: [Select]
menuLi.each(li => {
	li.appendChild(document.createTextNode(' »'));
});

Might seem cleaner, but we can't "see" the logic and it executes inherently slower since we've got function overhead involved.

NOT that I would use either technique, since I go for the DOM.

Code: [Select]
var li, mainMenu = document.getElementById('mainMenu');
if (li = mainMenu.firstElementChild) do {
	li.appendChild(document.createTextNode(' »'));
} while (li = li.nextElementSibling);

Which seems like a lot of code, but executes WAY faster than either of the above... (Of course if you want to piss on performance, use innerHTML instead of appendChild/createTextNode)

I dunno; like arrow functions, I'm not convinced that in most cases Array.each is anything more than false simplicity -- but at the same time I can see the appeal of the clarity it COULD bring if decoupled from the vague / painful / silly arrow function trash.

WhaddaYouFolksThink? (yes, that's all one word. Kinda like qwitcherbellyakin) :P

Reason I'm asking is I'm tempted to pull the polyfill for this and many other ECMAScript additions from Elementals 4.0 for being a waste of time and code.
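For reference, the sort of polyfill I'm talking about pulling is basically along these lines -- a minimal sketch, not the exact Elementals code:

```javascript
// Minimal illustrative sketch of an Array.prototype.each style polyfill:
// call the callback once per element, with an optional "this" to juggle.
if (!Array.prototype.each) Array.prototype.each = function(callback, thisArg) {
	for (var i = 0; i < this.length; i++) {
		callback.call(thisArg, this[i], i, this);
	}
};
```

Which is the whole point: all the polyfill buys you over writing that for loop inline is the function call overhead per element.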

Whilst most JS coders are vaguely familiar with Element.appendChild and may have heard of Element.insertBefore, the possibilities of the various next/previous/child properties -- and how insertBefore targets a child element, not the element you call the method on -- often elude understanding...

... when really there are four possibilities that really open doors to what you can do on the DOM.

Let's say we have this:

Code: [Select]
  <div id="section">Section 1</div>


Code: [Select]
var section = document.getElementById('section'),
	testTextNode = document.createTextNode(' HEY! ');

Let's say you want to insert testTextNode into section at the end of its text. That's easy.

Code: [Select]
section.appendChild(testTextNode);
That would give you the rough equivalent of:

Code: [Select]
  <div id="section">Section 1 HEY! </div>

But what if you want it before? Well, that's where insertBefore comes into play... but the second parameter of insertBefore is the trick, as you have to say WHAT to put it before. How about the firstChild?

Code: [Select]
section.insertBefore(testTextNode, section.firstChild);

Sure enough, you get roughly this:

Code: [Select]
  <div id="section"> HEY! Section 1</div>

Neat, huh? Well let's dial it up a notch. How about inserting it BEFORE #section?

Code: [Select]
section.parentNode.insertBefore(testTextNode, section);

... and we get:

Code: [Select]
   HEY! <div id="section">Section 1</div>

as our result. We can also put it after.

Code: [Select]
section.parentNode.insertBefore(testTextNode, section.nextSibling);

Which gives us:

Code: [Select]
  <div id="section">Section 1</div> HEY!

So to sum up:

Code: [Select]
// appendChild for last in targeted element
section.appendChild(testTextNode);

// insertBefore firstChild for first in target
section.insertBefore(testTextNode, section.firstChild);

// target.parentNode to target places before
section.parentNode.insertBefore(testTextNode, section);

// target.parentNode to target.nextSibling for after
section.parentNode.insertBefore(testTextNode, section.nextSibling);

Powerful, simple techniques that I rarely see, and again where the power of the DOM really shines, as you can insert any DOM Node in this manner.

General Discussion / So... who were you this Halloween?
« on: 1 Nov 2019, 12:58:39 am »
Hello kiddies, your friendly neighborhood Joker...

... is just a wee bit curious who else dressed up. I still say Halloween is the most uplifting and inspirational holiday of them all. Why? Because it's the only holiday where people go out of their way to do something nice for the kids of complete strangers.

Every other holiday either revels in personal greed and avarice or shoves "family" down your throat. Not Halloween. Adults go out and spend a slew of money on stuff to give away to kids in their neighborhood. Suck on that one, Santa.

I always try to make it a special time... I also just love the theatre and pageantry of it.

