• 2 Posts
  • 175 Comments
Joined 1 month ago
Cake day: June 6th, 2025

  • Was the vaginismus like involuntary contractions that created too much resistance for the dilator?

    Not contractions, more like a permanent contraction. Well, semi-permanent, as it resolved eventually. But until then, it was contracted tight 24/7.

    Has there been any thought about seeing the surgeon again or looking into rehabilitation, or is it easier to have moved on?

    My surgeon retired, as did one of the others, leaving a single Australian surgeon with a long wait list, and no guarantees he could fix it, especially given it was another surgeon’s work. In theory though, I could get PPV with him, but it would still be an uncertain outcome at the end of a long wait list.

    In theory I’ll do it one day, but the older I get, the less likely I am to bother.

    But I’m impressed by your healthy and adaptive mindset about it.

    Heh. That’s me with the benefit of time. When I was going through it, I didn’t have a healthy, adaptive mindset.


  • My case was a bit more complicated than my previous post outlined. After a year or so of dilating and having difficulties, I went back to my surgeon for a follow-up while I was supporting a friend who was having surgery with the same surgeon.

    He had a look at things and confirmed the presence of scarring causing the issue, but his examination also triggered something like vaginismus, and it became impossible for me to dilate with anything but the smallest dilator for a month or so afterwards. By the time that issue resolved, I had lost more depth and girth from the lack of meaningful dilation. So I spent some time trying to regain what I had lost, but getting back to where I was before the vaginismus would have taken months, with no guarantee of success. And given that even full success just meant getting back to a starting point that wasn’t working for me, it became really hard to keep dilating.

    And then covid happened, and my surgeon cleared his lists and stopped taking bookings. I wasn’t able to have penetrative sex even before any of this happened, so at that point I couldn’t see the point in continuing.

    It was a pretty big deal for me, and it left me feeling crushed. But the way I looked at it was this: even though I couldn’t have the sex life I had hoped for, that was also true before I had bottom surgery, and bottom surgery left me feeling comfortable in my body in a way I’d never had access to before. So I was still better off than where I started, even if I didn’t end up where I had hoped to.



  • Basically, after my healing was done, it just never got any better. No matter how much I dilated, the largest dilator was never comfortable, and with the effort it took to use that, nothing made out of flesh and blood instead of rigid plastic was going to stand a chance.

    Because it was never comfortable, and I was never able to have penetrative sex, I ended up just giving up on dilation during the covid lockdowns.


  • I had a very different experience, unfortunately. It turns out that I had quite a bit of internal scarring, so dilation was never pleasant for me. It wasn’t hard to do, but it didn’t feel comfortable. Sort of like stretching a piercing: tense and uncomfortable.

    Still, despite that, it was a life changing experience, and I’d do it again every time if I had the choice!




  • Since I last commented, the queue has jumped from about 9,000 outstanding items to 15,000, and it appears that I have timelines for a large portion of my history now.

    However, the estimated time is still slowly creeping up (though only by a minute or two, despite adding 6,000 more items to the queue).

    I haven’t uploaded anything manually that might have triggered the change in queue size.

    Are any external calls made while processing this queue that might be adding latency?

    tl;dr - something is definitely happening
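
    For what it’s worth, one way I could double-check that the queue really is draining, rather than relying on the estimate, would be to ask RabbitMQ directly. This is just a sketch, assuming the stock RabbitMQ container from the compose file and a hypothetical service name of rabbitmq:

        # hypothetical service name - adjust to whatever the compose file calls RabbitMQ
        docker compose exec rabbitmq rabbitmqctl list_queues name messages messages_unacknowledged

    Running that a few minutes apart should show the messages count for stay-detection-queue ticking down if processing is actually making progress.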


  • It’s a 1 GB JSON file with about 10 years of data. I get multiple repeats of the RabbitMQ timeout in the logs. The Job Status section tells me there are just under 9 hours of processing remaining for just over 16,000 items in the stay-detection-queue. The numbers change slightly, so something is happening, but it’s been going for over 12 hours now, and the time remaining is slowly going up, not down.

    reitti-1  | 2025-07-04T03:06:17.848Z  WARN 1 --- [ntContainer#2-1] o.s.a.r.l.SimpleMessageListenerContainer : Consumer raised exception, processing can restart if the connection factory supports it
    reitti-1  |
    reitti-1  | com.rabbitmq.client.ShutdownSignalException: channel error; protocol method: #method<channel.close>(reply-code=406, reply-text=PRECONDITION_FAILED - delivery acknowledgement on channel 9 timed out. Timeout value used: 1800000 ms. This timeout value can be configured, see consumers doc guide to learn more, class-id=0, method-id=0)
    reitti-1  |     at org.springframework.amqp.rabbit.listener.BlockingQueueConsumer.checkShutdown(BlockingQueueConsumer.java:493) ~[spring-rabbit-3.2.5.jar!/:3.2.5]
    reitti-1  |     at org.springframework.amqp.rabbit.listener.BlockingQueueConsumer.nextMessage(BlockingQueueConsumer.java:554) ~[spring-rabbit-3.2.5.jar!/:3.2.5]
    reitti-1  |     at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer.doReceiveAndExecute(SimpleMessageListenerContainer.java:1046) ~[spring-rabbit-3.2.5.jar!/:3.2.5]
    reitti-1  |     at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer.receiveAndExecute(SimpleMessageListenerContainer.java:1021) ~[spring-rabbit-3.2.5.jar!/:3.2.5]
    reitti-1  |     at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer$AsyncMessageProcessingConsumer.mainLoop(SimpleMessageListenerContainer.java:1423) ~[spring-rabbit-3.2.5.jar!/:3.2.5]
    reitti-1  |     at org.springframework.amqp.rabbit.listener.SimpleMessageListenerContainer$AsyncMessageProcessingConsumer.run(SimpleMessageListenerContainer.java:1324) ~[spring-rabbit-3.2.5.jar!/:3.2.5]
    reitti-1  |     at java.base/java.lang.Thread.run(Unknown Source) ~[na:na]
    reitti-1  | Caused by: com.rabbitmq.client.ShutdownSignalException: channel error; protocol method: #method<channel.close>(reply-code=406, reply-text=PRECONDITION_FAILED - delivery acknowledgement on channel 9 timed out. Timeout value used: 1800000 ms. This timeout value can be configured, see consumers doc guide to learn more, class-id=0, method-id=0)
    reitti-1  |     at com.rabbitmq.client.impl.ChannelN.asyncShutdown(ChannelN.java:528) ~[amqp-client-5.25.0.jar!/:5.25.0]
    reitti-1  |     at com.rabbitmq.client.impl.ChannelN.processAsync(ChannelN.java:349) ~[amqp-client-5.25.0.jar!/:5.25.0]
    reitti-1  |     at com.rabbitmq.client.impl.AMQChannel.handleCompleteInboundCommand(AMQChannel.java:193) ~[amqp-client-5.25.0.jar!/:5.25.0]
    reitti-1  |     at com.rabbitmq.client.impl.AMQChannel.handleFrame(AMQChannel.java:125) ~[amqp-client-5.25.0.jar!/:5.25.0]
    reitti-1  |     at com.rabbitmq.client.impl.AMQConnection.readFrame(AMQConnection.java:761) ~[amqp-client-5.25.0.jar!/:5.25.0]
    reitti-1  |     at com.rabbitmq.client.impl.AMQConnection.access$400(AMQConnection.java:48) ~[amqp-client-5.25.0.jar!/:5.25.0]
    reitti-1  |     at com.rabbitmq.client.impl.AMQConnection$MainLoop.run(AMQConnection.java:688) ~[amqp-client-5.25.0.jar!/:5.25.0]
    reitti-1  |     ... 1 common frames omitted
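
    For reference, the 1800000 ms in that error is RabbitMQ’s broker-side delivery acknowledgement timeout (consumer_timeout), which defaults to 30 minutes; if a single message takes longer than that to process, the broker closes the channel. A possible workaround, sketched here under the assumption of the stock RabbitMQ container and a hypothetical file/service name, is to raise that timeout via an extra config file:

        # rabbitmq-consumer-timeout.conf - raise the ack timeout from 30 minutes to 4 hours
        consumer_timeout = 14400000

        # docker-compose.yml excerpt (service name hypothetical)
        rabbitmq:
          volumes:
            - ./rabbitmq-consumer-timeout.conf:/etc/rabbitmq/conf.d/10-consumer-timeout.conf:ro

    That only hides the symptom if each message genuinely needs more than 30 minutes of processing, but it at least stops the channel from being closed and the work from being requeued mid-run.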
    


  • I don’t use bluesky or nostr for the very reasons I outlined in my comment, and I wouldn’t recommend them to anyone. Especially nostr, which is a shit hole.

    My point, though, is that they both do non-centralised ID, giving similar benefits to what the OP is suggesting, without the centralisation the OP’s approach relies on.