Discussion
@reiver ⊼ (Charles) :batman:
@reiver@mastodon.social · 2 weeks ago

JSON and IETF JSON feel different in spirit in some important ways to me.

For example, numbers.

JSON doesn't seem to put constraints on numbers. JSON also doesn't seem to suggest that decoding numbers in a lossy way is acceptable.

IETF JSON suggests a constraint, and even a lossiness, on numbers that has made its way into some JSON implementations.

I would have preferred that the paragraph in the screen-shot, from the latest IETF RFC, were not in there.

#JSON #RFC7159 #RFC8259

This specification allows implementations to set limits on the range and precision of numbers accepted.  Since software that implements IEEE 754 binary64 (double precision) numbers [IEEE754] is generally available and widely used, good interoperability can be achieved by implementations that expect no more precision or range than these provide, in the sense that implementations will approximate JSON numbers within the expected precision.  A JSON number such as 1E400 or 3.141592653589793238462643383279 may indicate potential interoperability problems, since it suggests that the software that created it expects receiving software to have greater capabilities for numeric magnitude and precision than is widely available.
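A minimal sketch, in plain C and not tied to any particular JSON library, of the lossy decoding the quoted paragraph tolerates: the over-precise number from the RFC text is parsed into an IEEE 754 binary64, and the extra digits silently vanish.

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* The second example number from the RFC paragraph above. */
        const char *json_number = "3.141592653589793238462643383279";

        /* Decode it the way many JSON implementations do: into a binary64. */
        double d = strtod(json_number, NULL);

        /* binary64 holds roughly 15-17 significant decimal digits, so the
         * trailing digits of the literal are silently discarded. */
        printf("original: %s\n", json_number);
        printf("decoded:  %.17g\n", d); /* prints 3.1415926535897931 */
        return 0;
    }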
Felix Palmen :freebsd: :c64:
@zirias@mastodon.bsd.cafe · 9 months ago

Hopefully, there will be another release of #swad soon!

Looking at my test results again, performance should be okay, at least for moderately busy sites. The 1000 requests per second I observed included actual logins, and I didn't even test whether it would handle more (it probably would). The only real issue was resolving remote names: with that enabled, around 30% of these requests failed because the thread pool was clogged with jobs all waiting for some DNS response. So the recommendation would be: just disable that feature if your site is a busy one.

But I'm really unhappy with RAM usage going up so much. Almost 100MiB resident set after seeing 1000 unique clients all attempting a login is a lot after all.

So, I'll try to move swad to a session-less design. It can't be fully stateless (a rate limiter will still be needed), but maybe I can optimize a bit on that.

But the sessions could be replaced. They're currently used for two things:

* Store actual auth information. This could be stored in signed JWTs (JSON Web Tokens) on the client instead. I'm already starting to add JSON support to my poser lib 😉

* Store the random challenge for the #anubis-like proof-of-work checker. Could do the same as anubis here: derive the challenge from request metadata instead, including a timestamp (a rough sketch of that idea follows below).

Will be quite some work, but could be doable.
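A rough sketch of that second point, the stateless challenge, assuming OpenSSL's HMAC and purely illustrative names (make_challenge and the metadata fields are not swad or poser API): the challenge is derived as an HMAC over request metadata plus a coarse timestamp, so the server can recompute it later instead of storing it in a session.

    #include <stdio.h>
    #include <string.h>
    #include <time.h>
    #include <openssl/evp.h>
    #include <openssl/hmac.h>

    #define CHALLENGE_WINDOW 300 /* seconds a derived challenge stays acceptable */

    /* Hypothetical helper: challenge = HMAC(secret, client_ip | user_agent | time bucket).
     * Because the server can recompute this when the proof-of-work answer
     * comes back, nothing has to be kept in a server-side session. */
    static unsigned int make_challenge(const unsigned char *secret, size_t secretlen,
            const char *client_ip, const char *user_agent,
            unsigned char out[EVP_MAX_MD_SIZE])
    {
        char msg[512];
        unsigned int outlen = 0;
        long long bucket = (long long)(time(NULL) / CHALLENGE_WINDOW);

        snprintf(msg, sizeof msg, "%s|%s|%lld", client_ip, user_agent, bucket);
        HMAC(EVP_sha256(), secret, (int)secretlen,
                (const unsigned char *)msg, strlen(msg), out, &outlen);
        return outlen;
    }

    int main(void)
    {
        unsigned char challenge[EVP_MAX_MD_SIZE];
        unsigned int len = make_challenge((const unsigned char *)"demo-secret", 11,
                "203.0.113.7", "curl/8.8.0", challenge);

        for (unsigned int i = 0; i < len; ++i)
            printf("%02x", challenge[i]);
        putchar('\n');
        return 0;
    }

Verifying a submitted answer would then just mean recomputing the HMAC for the current (and perhaps the previous) time bucket, leaving the rate limiter as the only per-client state.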

Felix Palmen :freebsd: :c64:
@zirias@mastodon.bsd.cafe replied · 9 months ago

Seems a first step is almost done, adding #JSON support to my #poser lib. This could be the foundation for #JWT support in #swad. 😎

Need to do more thorough testing I guess, but at least the two example documents from #rfc8259 work fine ... the test tool does a full #deserialization / #serialization roundtrip (with specific internal representations of the data types supported by JSON).

edit: Look at the "Longitude" value of the second object in the second example 😏 I only noticed it myself just now, but of course that's the desired behavior.

Testing JSON serialization in poser with the examples provided by RFC 8259
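For anyone puzzled by that remark: the second object's "Longitude" in RFC 8259's second example is written as -122.026020, but once the value is stored as a binary64 the trailing zero of the original text is gone, so re-serialization emits the value's own decimal form. A tiny illustration, independent of poser's actual API:

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* "Longitude" of the second object in RFC 8259's second example. */
        const char *text = "-122.026020";

        /* Deserializing keeps only the numeric value,
         * not the original spelling with its trailing zero. */
        double lon = strtod(text, NULL);

        /* Re-serializing therefore produces the value's own decimal form. */
        printf("in:  %s\n", text);
        printf("out: %.15g\n", lon); /* prints -122.02602 */
        return 0;
    }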
