The fact that I was waffling about this got me thinking about two things. First: services like Facebook and Twitter made it simple to post from anywhere, which is how they got everyone to produce content for them. Second: I want to post more on my website instead of other people’s services because I want to own my content — but what the heck does that even mean?
I started waffling about content ownership while playing with Micro.blog, a service I backed on Kickstarter in 2017. I used it a bit when it launched, but never stuck with it. I was very happy to support the project, but it occupied a strange middle-space between a social service like Twitter and posting to this website. I signed up again last year thinking I could use it just for posting short-form content from my phone, but that never really happened either. The service is really great; I just couldn’t find a way to make a habit of using it.
One of the values of the Micro.blog team is owning your data. Let’s dig into that.
Does this count as “ownership”? I’m pretty sure the answer is yes, but I still had some questions in the back of my mind. Why do I think Micro.blog allows more ownership than Twitter did? Twitter allowed you to export your data, after all. Twitter also allowed you to pay (although only for extra features). Do I own my Micro.blog content more because it has additional export options and can be tied to a domain I control? What if I’m using a default micro.blog URL instead?
I don’t mean to pick on Micro.blog here. Again, I think the service is great. I just thought there was an interesting juxtaposition between it and something like Twitter. I also spent some time wondering about my current setup. I’d like to think that I own the content hosted on this website!
The content you’re reading now is served via GitHub Pages. It’s hosted on servers ultimately owned by Microsoft. I use the built-in Jekyll publishing flow provided by GitHub to push content to the web. I pay for none of this. Is the main difference between posting here versus, say, Facebook that I have my domain pointed at the content? Heck, do I even “own” my domain? This question might seem silly, but it’s quite scary if you think about it too long. I’m placing a huge amount of trust in my domain registrar, from whom I essentially rent the most important piece of my online presence. If things went wrong here, a lot of things that are very important to me could break.
Obviously, I’m overthinking this. That said, I did find it helpful to break down what I thought “ownership” meant in the context of content on the web.
I think that “ownership” implies some level of control. I want to put my work somewhere where I have the most say in how it’s displayed and used. Do I give up some control by using GitHub? Yes, because I have to agree to their terms of service — but every host has some version of this. With GitHub (as with Micro.blog) I can control what the content looks like and how it’s put into the world.
Any host could potentially remove or disrupt my content, but I feel more in control when I have the ability to easily move a site elsewhere. In the case of this site, it would be very easy for me to move to a service like Netlify, Vercel, or any number of other hosts. This is especially true because this site is all just HTML, and it can be generated locally by Jekyll.
When I used Twitter, part of the deal was that they could show ads next to anything I posted there. When using the first-party app, they also showed trending topics, which I often found irritating. I didn’t think about it very much at the time, but it was clear that I was using Twitter’s service, and they were in control of how my content appeared to people. There’s nothing inherently wrong with a trade-off like this, and I got a lot of utility from the service. It bothered me more and more in the later years, though.
I’m not going to pretend that I’m writing any deep thoughts here, but it is something that keeps popping into my head. I love that there’s a push for people to create more content on the web, and one of the reasons often cited for publishing on your own website is to “own your content”. I think that’s a good thing, but I don’t want to take it as read. Thinking this through also helped me feel better about services like Mastodon and Bluesky gaining more traction. Both have data ownership as a core value, even if most people likely won’t take advantage of it.
Meta advertises the fediverse integration as an unfinished beta. This makes sense! Integrating ActivityPub in a way that serves an existing 130+ million users is an intimidating task. That said, I definitely wasn’t alone in my issues getting my Threads account federated. It was great to see the issue resolved quickly, though. Also, it currently takes several minutes for posts made on Threads to be pushed to a follower on Mastodon. I’m assuming the lag is a growing pain, however.
The integration is also tentative. For starters, users must explicitly choose to enable account federation. I’m not sure if this will ever change, but it does limit the number of Threads users who will federate. For those who do enable federation, posting is currently one-way. A Mastodon user (for example) can follow someone on Threads, but Threads users can still only follow other Threads users today. Also, a user from Mastodon can comment on or boost content from Threads, but the Threads user won’t be notified about it. The only action that does get federated notifications in the Threads UI is a “like”, but the details in that notification are quite limited.
In a post on Meta’s engineering blog, the authors outline these limitations and discuss what’s next. The post mentions that the team is working toward having two-way communication between Threads and fediverse servers, but they also outline some design and safety considerations. For instance, the ActivityPub standard doesn’t currently specify how to implement quote-posts. Threads launched with quote-posting, but Mastodon doesn’t currently support it. The Threads team is using the Misskey ActivityPub extension as a way to signal quote-posting, but it’s not clear if this unofficial extension will gain traction 2. There are also UX concerns around federated Threads users interacting with non-federated Threads users. The Threads team wants the Threads-native UX to take priority over any complexities caused by content federation. This seems reasonable to me, but it demonstrates how tricky adding this functionality to a large system can be.
Even with these limitations, I’m excited to see Threads push this work forward in public. When Threads first launched, I remember many being skeptical that they’d ever follow through on federation. It seems to me that they want to show progress, and the official communications come across as earnest and thoughtful about how they’re rolling things out.
But maybe I’m just being naïve. As I’ve written before, there are a lot of people who think Meta taking advantage of an open standard means the end of the fediverse… or something. Not pointing fingers, but I’ve seen some open standards advocates very upset by Meta’s use of ActivityPub in Threads. I struggle to understand this point of view. Meta is putting a large amount of engineering effort into integrating with a system that has (generously) about 2 million monthly active users. Is Meta trying to cut the legs out from beneath a possible open source competitor? I guess anything’s possible, but that sure doesn’t seem like an efficient use of resources. I’m pretty confident that people running their own Firefish server aren’t going to move to Threads. And even if Threads “wins”, I don’t think these small communities will be affected if they choose to defederate from it.
There are two potential dangers I see from this. One is that users might choose Threads as their primary network for sharing content, knowing that it will federate to other ActivityPub services. I saw one instance of this yesterday, as the Threads fediverse integration was announced, in an exchange between Alex Heath (deputy editor for The Verge) and Eugen Rochko (the founder of Mastodon). Alex’s point is that, currently, posting news on Threads gives him more reach. Eugen’s point is that, if you keep posting on Mastodon, eventually Threads users will be able to read your Mastodon content natively. The word doing a lot of work there is “eventually”. I’m optimistic that Threads will allow two-way federation in the next few months, but there’s no guarantee. In the meantime, it’s hard to argue that Threads doesn’t give you a lot more reach.
The other potential issue: fediverse admins defederating from other instances that choose to allow Threads content in. There are a lot of users who are concerned about Threads somehow “stealing” their data and want to preemptively defederate from Threads because of this. That’s fine, though misguided 3. Part of the promise of the fediverse is that anyone can run their own instance and choose their own rules. But I worry that “purity tests” might be the real end of the fediverse, not Threads.
The sign-up process is quite nice when it’s working. There’s a nice overview of federated networking as part of the sign-up/consent flow. ↩
Another interesting ActivityPub extension by Misskey is isCat. ↩
Threads only has access to your public data or the data you share with Threads users. That data’s already public, or being shared with Meta explicitly. Anyone with access to the web can scrape and index the public data already. ↩
I first learned of it via a great Verge article, which has more details including a photo of the included one-shot campaign. Also, if you order before April 8th, you’ll get a bonus mimic chest? That is some powerful FOMO-based marketing.
Looking at this, I was curious about when all the LEGO brand partnerships started. As far as I can tell, it seems to have started in 1999 with a Star Wars set tied to the release of The Phantom Menace. More recently, LEGO Fortnite has been a big hit, and the list of LEGO set themes is increasingly filled with IP from other brands. It’s wild to see how much LEGO has embraced the “metaverse” idea.
Titled “The Lost Universe”, the 44-page PDF is a system-agnostic adventure for 4–7 players of levels 7–10. It centers on the Hubble Space Telescope and starts on an alternate-timeline Earth where the telescope no longer exists. Each of the players takes the role of someone who took part in the Hubble program and still has memories of it. The introduction has the players transported to a rogue planet where magic is possible thanks to the power of zero-point energy. Now they need to find a way to right the timeline and get home.
The whole thing is very well produced! There are great notes for game masters on how to play NPCs, and the PDF itself is well annotated and provides internal links. Even if you’re not interested in TTRPGs, this is a fun little story to read through.
The announcement linked to a developer-focused post which stresses that this is a work in progress and not really ready for prime time. This isn’t a knock against the announcement; the team is working in the open. However, after reading it I had several thoughts and questions.
It might be helpful to mention the “Bluesky and the AT Protocol: Usable Decentralized Social Media” white paper. It was published earlier this month and gives the fullest overview I’ve seen of atproto’s architecture. It helped me understand some of the technical decisions and trade-offs.
With that, the meat of the announcement is that users can now host their own Personal Data Server (PDS). This server allows for user authentication and data handling through atproto. There’s a sample server provided by Bluesky which can be hosted on fairly modest hardware. 1
The Bluesky team is up-front that the process of migrating to your own PDS could break things. They advise users against migrating their primary accounts. In the same document they also point out that the process is currently one-way only. This will likely change with time, but for now there’s no way to hand things back to Bluesky. It’s also a rather hands-on process: you need to join a Discord and request that your PDS be added to the network. Once you’re accepted, you’ll be kept up to date on Discord about changes and updates. Overall, things are kind of sketchy at the moment.
When I first started digging into the migration process, I was curious about how media files would be moved. Users have been able to export their content from Bluesky for a while now, but the file you get back is not exactly easy to work with. There is some documentation about the export format with code snippets to help you deal with it, but it’s a bit of a faff. It’s interesting to note that media files are not included in this export, and that you need to navigate the IPLD DAG-CBOR objects manually to get that content yourself.
It’s also interesting that the export documentation contains a privacy notice. Because of atproto’s architecture, almost everything is public. By design, you can easily use the export process to download any user’s data.
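To make that concrete, here’s a rough sketch of what fetching a repo and its media looks like. This is Python against the sync endpoints as documented at the time of writing; the PDS host and DID below are placeholders, and real code would need to handle pagination and errors:

```python
import pathlib

import requests

PDS = "https://bsky.social"   # placeholder: the host serving the account's repo
DID = "did:plc:example"       # placeholder DID, for illustration only

# The full repo comes down as a CAR file of DAG-CBOR blocks.
repo = requests.get(
    f"{PDS}/xrpc/com.atproto.sync.getRepo", params={"did": DID}, timeout=30
)
repo.raise_for_status()
pathlib.Path("repo.car").write_bytes(repo.content)

# Media blobs aren't in the CAR; they're listed and fetched separately.
# (listBlobs is paginated via a cursor; this grabs only the first page.)
cids = requests.get(
    f"{PDS}/xrpc/com.atproto.sync.listBlobs", params={"did": DID}, timeout=30
).json()["cids"]
for cid in cids:
    blob = requests.get(
        f"{PDS}/xrpc/com.atproto.sync.getBlob",
        params={"did": DID, "cid": cid},
        timeout=30,
    )
    pathlib.Path(f"blob-{cid}").write_bytes(blob.content)
```

Note that none of this requires authentication, which is exactly what that privacy notice is getting at.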
Looking at the account migration documentation again, there’s a diagram that shows the migration process. At first glance it seems to indicate that the new server would fetch this content for you, but reading further down the page, it looks like this is something you need to do manually. There’s some sample code provided, but the document stresses that it shouldn’t be run as-is.
On the whole, this is an interesting first step. But it also shows how early Bluesky is in the process of federation. PDS migration seems shaky at best, and will need a lot of work to get it to a point where most technical people would want to give it a shot.
While I was looking into the migration, I also spent some time digging into some previous frustrations with atproto. The first is the DID PLC (short for “placeholder”) method. One of my larger complaints about atproto is that it’s rather complicated, and DID PLC is a great example of that. I recommend checking the white paper linked above for a good overview.
It was previously noted in the documentation that the team wanted to replace this method “within the next few years”; this has since been updated to say it will be supported indefinitely. The main reason they want to replace it is that it’s centralized: to resolve a did:plc identifier, you have to ask the server at https://plc.directory/ about it. It’s mentioned that Bluesky wants to make this decentralized eventually, but this system is sort of like the DNS of the Bluesky platform, so I think that will take a while.
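You can see the centralization plainly by doing a resolution yourself. Here’s a minimal sketch in Python, assuming the directory’s documented resolve-by-GET route; the DID is a placeholder, not a real account:

```python
import requests

def resolve_plc(did: str) -> dict:
    """Fetch the DID document for a did:plc identifier from the central directory."""
    resp = requests.get(f"https://plc.directory/{did}", timeout=10)
    resp.raise_for_status()
    return resp.json()

# Placeholder DID for illustration:
doc = resolve_plc("did:plc:example")
print(doc["alsoKnownAs"])  # handle aliases for the account
print(doc["service"])      # includes the account's current PDS endpoint
```

Every resolution on the network ultimately bottoms out at that one hostname.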
Another nitpick I had was around indexing and relays. As part of atproto, Bluesky operates an index of all actions (posts, replies, likes, etc.). To create this index, they crawl all PDSs and then act on any new content found in them. While the Bluesky team has made progress on letting people host their own content, there’s very little detail on how this system will be decentralized. Even once it can be, this is “firehose”-level stuff. It will take a lot of compute/disk/network resources to spin up a new index.
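For a sense of what the firehose looks like, here’s a minimal sketch that attaches to the public relay’s subscribeRepos stream and prints frame headers. I’m assuming the relay host below and the websockets and cbor2 packages; fully decoding the frame bodies needs a proper DAG-CBOR/CAR library:

```python
import asyncio
import io

import cbor2
import websockets

RELAY = "wss://bsky.network/xrpc/com.atproto.sync.subscribeRepos"

async def watch_firehose(limit: int = 10) -> None:
    async with websockets.connect(RELAY) as ws:
        for _ in range(limit):
            frame = await ws.recv()
            # Each frame is two concatenated CBOR items: a header and a body.
            decoder = cbor2.CBORDecoder(io.BytesIO(frame))
            header = decoder.decode()  # e.g. {"op": 1, "t": "#commit"}
            print(header)
            # The body is DAG-CBOR embedding a CAR slice of the commit;
            # decoding it properly is left to a dedicated library.

asyncio.run(watch_firehose())
```

Ten frames arrive almost instantly. An indexer has to keep up with all of them, forever, which is why spinning up an alternative index is so expensive.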
This corner of the system is rather light on detail. There’s a bit of a description in the white paper, but I haven’t found technical details about how this system works. It has also seemingly changed since I last wrote about it. There’s a Bluesky blog post from around that time which describes a “big world with small world fallbacks” system, but mentions of that seem to have been removed from the documentation. A previous version of the “Crawling Indexer” diagram has been replaced with a new one that removes the “Small-world” interconnects. I assume that means it isn’t part of the plan anymore?
After digging into this again, I’m still frustrated by the design trade-offs the Bluesky team is choosing. The system is currently centralized. I don’t doubt that they want to fix this eventually, but I feel like they’re often given credit for having solved the problem already. Unless I’m missing something, decentralizing the did:plc and indexing systems will be extremely difficult.
It’s also still unclear how any of this will be monetized. They partnered with Namecheap last year to sell domains you can use as your username. I guess that’s something, but I doubt it’s going to make much money relative to infrastructure (let alone personnel) costs. My guess is that most people who’d use a domain as part of their username already have the domain they want to use. It seems like they have a good amount of runway, but I’d bet that the service will be less fun to use when investors start looking for a return.
I think people will be surprised by how little privacy there is on the service. Because of the indexing flow, all data currently must be public. As noted above, you can export any user’s data. You can also easily attach to the firehose of all new posts, or just literally download the whole thing. Again, this is all by design and not necessarily bad. I just think it might be surprising to many.
I’m personally a fan of ActivityPub. The standard has several issues, but the key benefit is that it’s quite straightforward. Even as atproto work moves forward, there are still a lot of unanswered questions. It feels like there’s a good deal of second-system effect going on. I’d still prefer my “team” win, but I’m curious to watch Bluesky/atproto develop.
Based on the provided specs, hosting the project on a Digital Ocean VPS would start at $18 USD/month. ↩
First, a bit on static site generators (SSGs). Serving only static content is fast. With an SSG, you’re effectively “pre-compiling” all server-side elements of a site at build time. Once you send the “compiled” version of the site to your server, the files just need to be served. This is the easiest possible thing for a web server to do, so it scales extremely well. A recent post was the number 1 post on Hacker News for over 6 hours. This generated a lot of traffic, but my site never slowed down.
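If it helps to see the idea stripped to its core, here’s a toy generator. Everything about it (file layout, template, naming) is invented for the example and is nothing like Jekyll’s internals, but it shows how all the “server-side” work happens once, at build time:

```python
import pathlib

# A single hard-coded template stands in for a real theming system.
TEMPLATE = "<html><head><title>{title}</title></head><body>{body}</body></html>"

def build(src: str = "posts", out: str = "_site") -> None:
    """Render every source file to a finished HTML page, once, at build time."""
    out_dir = pathlib.Path(out)
    out_dir.mkdir(exist_ok=True)
    for post in pathlib.Path(src).glob("*.txt"):
        # Convention for this toy: first line is the title, the rest is the body.
        title, _, body = post.read_text().partition("\n")
        html = TEMPLATE.format(title=title, body=f"<p>{body.strip()}</p>")
        (out_dir / f"{post.stem}.html").write_text(html)

if __name__ == "__main__":
    build()
```

After a build, the server’s only job is shoveling the files in _site to whoever asks, which is why static sites shrug off Hacker News traffic.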
There are many providers who specialize in hosting static content. A popular choice is Netlify. I’ve really enjoyed working with their system in the past, and have recommended it to others. Heck, you can also just use Amazon S3 to host static content.
This site is hosted on GitHub Pages, which uses Jekyll by default. I enjoy using GitHub in general, and their Pages product is free. It’s also powered by Microsoft’s server infrastructure, which is no slouch. Getting set up to host using GitHub Pages is as easy as setting up a repo and pushing content built for Jekyll. Using a custom domain on your site is straightforward and also free. Jekyll was also written with GitHub Pages in mind. It seems like a perfect match.
In August of 2019, Jekyll 4.0 was released. The same day, a GitHub issue was created requesting that GitHub Pages be updated to support the new version. That issue is still open, and as of August 2022 an engineering manager at GitHub has indicated that this update isn’t going to happen anytime soon. Also, previously the Jekyll to GitHub Pages flow was a special case that happened when committing to GitHub. Now it all uses GitHub Actions, putting Jekyll on about the same footing as any other SSG.
As of this writing, Jekyll 3.9.5 is the version supported by GitHub Pages. GitHub continues to maintain the libraries used to support Pages, but I’m not convinced that there’s much appetite for larger changes. If anything starts breaking because Jekyll 3.x is too old, my guess is that we’re going to be told that GitHub Actions is a great alternative.
Because it’s easier, I’m currently running the latest version of Jekyll 4.x locally and still using the default GitHub Pages publishing flow. This hasn’t bitten me yet, but it doesn’t seem ideal. Jekyll is also the only Ruby project I use frequently. I should probably set up the same level of library/environment control as I do with my Python projects, but I haven’t so far. I’ve become quite proficient at fixing Jekyll when Homebrew updates break it, but again: not great.
So, am I going to switch right away? No. But I’ve started looking at alternatives. The two biggest are 11ty and Hugo. 11ty has the advantage of being written in JavaScript, so the tooling is familiar to me and I feel I could manage it well. Hugo is written in Go, but it seems quite self-contained. If I’m not using the Jekyll-specific parts of GitHub Pages, I could effectively ignore how Hugo is built and just think of it as a black box.
Whatever I do, I’m sure there will be tradeoffs. The publishing flow today with Jekyll and GitHub Pages is going to be very hard to beat, but seeing some of the cool things people are doing with newer tools like 11ty has me feeling some serious FOMO.
I never would have guessed that the html element wouldn’t have 100% support on CanIUse.com. Heck, I’ve been using it since 1994 and it worked just fine back then! This led me down a bit of a rabbit hole.
A bit of background, first. Can I Use… is a site that helps web developers track the adoption rate of web technologies. It estimates browser usage, measures feature compatibility, and spits out a number that tries to reflect how available a feature is. It’s a site I’ve been using almost since it launched in 2010 and I’ve always found it really useful.
So why is it currently saying the html element only has 97.34% support? That’s less than the current support percentage for the audio element! It also looks like the same is true for the a and p elements, with exactly the same 97.34% support number.
One thing I learned when looking into this is that a lot of the data on the site actually comes from MDN 1. MDN is another resource I use and trust, so this seems reasonable to me. It also often has stats about feature uptake, so it makes sense for CanIUse.com to piggyback on that.
Looking at the MDN page for the html element, it has a browser compatibility section. In that are two rows with a lot of red Xs. The first is for the optional manifest attribute on the html element. This is deprecated and was never standardized. The second is the related “secure context required”, which is an Editor’s Draft (that is, not something currently on the standardization track). I don’t know how this was previously related to the html element, but that use was also never standardized and is deprecated.
So, there are two features listed here that almost all browsers correctly do not support. But still, it doesn’t look like this is the reason for the missing 2.66%. There are some browsers that are listed as “Support Unknown”. Adding up all the current usage for these browsers comes to 1.27%. There’s also an entry for Android Browser versions 2.1–4.3 which is listed as not supporting the html element (which I find highly dubious), but it’s listed as having a usage share of 0%. I suppose there might be some rounding errors here that would bump the 1.27% to 2.66%? But I still find this very unclear. Also, I feel very confident that those older browsers supported the html element!
So yeah, I don’t have a great answer for this. If you do, please let me know! I’ve always taken the numbers from CanIUse.com with a few grains of salt, but I’ll be adding a few more going forward. I still think it’s a great resource.
UPDATE: rezonant on Mastodon poked me to let me know that he posted a comment on Hacker News that has a possible explanation. The short version: if you switch to “% of all tracked” in the top right beside “usage”, and then add up the “support unknown” browser numbers, you get 99.98%. It’s a lot easier to see how rounding errors could make this the correct number. Still, I think the way older browsers are handled here is confusing. It doesn’t affect the overall number too much, but in the case of base-level elements like html, it seems odd.
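To make the arithmetic concrete, here’s a toy version of the two modes. The figures are the rough ones from above; the real site computes per-browser and rounds each share, which is where the 99.98% comes from:

```python
supported = 97.34   # share of *all* users on browsers marked as supporting html
unknown = 1.27      # share on browsers whose support is listed as "unknown"
untracked = 100 - supported - unknown  # usage CanIUse can't attribute at all

# "% of all users" mode: untracked usage drags the headline number down.
print(f"% of all users: {supported:.2f}")

# "% of all tracked" mode: renormalize over tracked browsers only.
tracked = supported + unknown
print(f"% of all tracked: {supported / tracked * 100:.2f} supported "
      f"+ {unknown / tracked * 100:.2f} unknown")
# Supported + unknown sums to 100 here by construction; per-browser
# rounding on the real site nudges that to the observed 99.98.
```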
MDN used to stand for Mozilla Developer Network. Now it’s just MDN. I spent a few minutes looking on the MDN site to see if I could find any mention of the full name, but I guess they’re just all in on “MDN” now. ↩
Some ancient history to set the stage. In the early 90s, I got to try Dactyl Nightmare. I remember having to beg my parents to let me try the thing, waiting in a long line, and only getting to play for a few minutes. It was amazing! I only vaguely remember the experience, but that low-poly world felt real in a way I could barely put into words.
Fast forward to October 2013: I got to try the Oculus Rift developer kit. I was at a conference where a company was showing off a Paperboy-like prototype using the dev kit and a Microsoft Kinect. It was a bit janky, but still a very cool experience. For whatever reason, it didn’t feel as immersive as I remember that clunky-assed 90s game being. Maybe it had something to do with Dactyl Nightmare being over-hyped by my very young brain. It probably wasn’t helped by the demo being something thrown together quickly for the conference. Also, I was definitely older and more jaded. Whatever the reason, it didn’t seem as magical to me.
More recently I’ve had occasion to try newer headsets from Oculus and Valve. These are great pieces of technology, and I’m glad people are continuing to work on them. But as cool as some of the VR experiences were, I could never bring myself to buy a headset. I continue to feel pretty sure that if I did buy one, it would start gathering dust after a week or two. Gaming on a TV is easier and, frankly, more fun. If I could go back in time and tell this to my 90s self, the little guy’s head would explode. All this to say: I think VR tech is interesting, but I’m not very excited by it.
Back to the present: Apple’s launching a new piece of hardware tomorrow. I really like Apple hardware, and follow their announcements like other people follow sporting events. I want to be more excited about this than I am.
One thing to note here: I’ve only really considered the “headset” space in the context of gaming. Apple has never been good at gaming. Recently they’ve been making motions toward getting serious about the gaming space, but they’ve got a lot of convincing to do. This might just be a “me problem”, as I’m not sure the primary purpose of Vision Pro has ever been gaming related.
But if not gaming, then what? “Spatial computing” seems to be the idea Apple is pushing. One piece of this is having an “infinite canvas for apps”. That actually sounds pretty promising! However, I’m quite skeptical Vision Pro will have many apps I want to use out of the gate. When Apple launched the M1, it enabled iPhone and iPad apps to run in macOS. Sadly, for a variety of reasons, most developers chose to disable their apps from being run this way. A big reason for this was that most devs didn’t have M1 machines yet. They couldn’t confirm the mobile versions of their app would work well on macOS. I’d bet pretty heavily that the same thing will happen for visionOS.
Another selling point of Vision Pro is immersive video. This is something I’ve definitely wanted at points, and I could imagine it being a huge boon for people who fly a lot. But that’s going to be hurt by Netflix and YouTube both choosing not to release apps for the platform yet.
Of course, the position of the product at launch isn’t the whole story. The iPhone was a world-changing device, but it didn’t really do much at launch. There was no App Store. You couldn’t record video. Heck, there wasn’t even copy and paste. It was still incredibly novel, but it only existed on one mobile carrier and cost several times more than what most people thought a phone should. The same is true for the Apple Watch. In both cases, I got the third iteration and continued to use them going forward.
People who’ve had a chance to try Vision Pro seem to think there’s something there. Maybe not enough to justify the current price tag, but it seems like there’s something. Just like when Apple launched their watch, there seems to be a bit of throwing things at the wall to see what sticks. Because of this, the meme is that Vision Pro is a “public dev kit”. But as someone who helps people build software products, the advice I always give to clients is: get things into the hands of users as soon as possible. It’s the very best way to see if you’re on the right track. Apple has a great track record for building extremely polished hardware, but the software always takes large turns after version 1.0. I think people forget this.
The one thing I am really excited about regarding Vision Pro is the story. Apple’s doing something different here. Maybe it’s not for me. Maybe it’s just not for me yet. Maybe I’ll try one (once they’re available in the far-flung land of Canada) and I’ll buy one on the spot. Whatever the case, I’m pretty sure they’re going to sell out of initial stock on day one. What comes next will be a lot of hot takes, and I’ll be here for all of them. If this thing takes off, that will be interesting! If Apple has a dud, that will also be very interesting! Let’s see what people say once they start shipping out. I’m excited for all temperatures of take.
Sometimes there’s a puzzled look. Who cares about libraries these days? At least for me, libraries have only got more interesting in the past decade. My plan is to try and get you to look into your library’s services by bragging a bit about what my library offers.
This is usually the reason I bring up getting a library card. I often get asked if I know of good places to start learning some technology. Our library system has partnered with LinkedIn Learning (formerly Lynda.com). Not everyone learns best from reading books or documentation, and YouTube can be hit or miss. It’s news to just about everyone I speak with that our library offers this content for free.
When I started freelancing, I got a cheap multi-function inkjet printer. I still use it for scanning, but it turns out that I print things too infrequently. Ink is expensive, and I found it was often dried out when I wanted to use it. These days I print everything at the library. Everyone with a card gets 50 pages a month for free, and it’s $0.10 a page after that.
I’ve only used this service twice, but it’s been extremely helpful for prototyping. With my library card, I have access to 3D printing services at the main branch, an engineering campus at a local university, and a few other locations. The cost is based on time, weight, or the amount of filament used. If you’re looking to give 3D printing a shot, this might be a better way to go than buying an expensive device you’ll use a handful of times.
This kind of falls in the category of “normal library stuff”, but Libby is an astonishingly well designed app. Just enter your library card details and start reading books, listening to audiobooks, browsing magazines, or checking out manga/graphic novels. You may need to be added to a waitlist, but you can sign up for notifications when things are available. Libby works with just about all libraries in North America. If I wasn’t already paying stupid money for Apple One, this is how I’d read Edge and Retro Gamer.
Aside from physical video media that you can borrow from most libraries, many provide access to streaming services. A common one is Kanopy, but some services like Hoopla also offer streaming video.
This might not matter to some, but I still like having access to traditional news media. Our library offers PressReader, giving card holders the ability to read digital editions of hundreds of papers from around the world.
Libraries offer a lot of services to the community, but the space itself can be useful. Obviously they can be places to work and research, but they also often hold events. My library holds regular small business sessions and helps with startup financing. Heck, a nearby library also offers a D&D drop-in that I just learned about while writing this. Also, if you’re researching something, librarians kinda do this for a living. There’s a good chance they know about other resources you might not. At least at my library, they’re great folks!
I want to say that I feel privileged to live just a few blocks from an award-winning library. That said, I grew up in a small town with a library that also offers most of these services. Also, you may have access to more than just one library. In Halifax, my library card gives me access to many services from the three university libraries within walking distance.
I personally think Halifax is special, but it’s a pretty small city. Check out what your local library has to offer. You might be surprised!
I hadn’t checked back on the site in a while, but it’s been improved recently. Today I saw a Mastodon post announcing individual stats pages for every site the directory tracks. I love this feature!