Search Results

Search found 10433 results on 418 pages for 'session replication'.


  • Replacing DropBox with: Amazon S3 + SSL + GPG/TrueCrypt + Mounting on OSX ??

    - by Matt Rogish
    So, right now we're using DropBox to share various data files between approximately 10 Mac OS X systems. However, we already have an S3 account, and putting everyone on the lowest DropBox plan of $10/mo seems too expensive. We'd like to avoid any kind of local storage (sharing a disk on a desktop or something) since we're a geographically distributed team. So, I am contemplating something that would allow us to replace DropBox with our own home-grown solution. We are all fairly technical people and/or smart enough to follow some steps, so if it's not as "user friendly" as DropBox we're all comfortable with that.

    There are plenty of docs out there that have bits and pieces of what I want, but some of the tools don't seem to fit the requirements:

    1. Transport security via SSL to the bucket
    2. Encryption of bucket contents
    3. Bi-directional syncing

    Most of the scripts I can find on the internet use "duplicity", which appears to fail #1 (it doesn't look like duplicity supports SSL to S3 - the docs don't say, but the protocol looks like plain old http: http://www.nongnu.org/duplicity/duplicity.1.html#sect6 ). Many scripts use gpg to encrypt files. This seems like it could work; however, I have to make sure that each OSX client is able to use the same key to encrypt and decrypt files (key management is left to me to manage). FTP and other client-based apps don't seem to support this at all. Finally, most of the scripts use one-way replication, e.g. using Amazon S3 as a simple backup store. As we'd be using Amazon S3 as the "repository", they fail this one.

    Whew. So, I'd love a single tool that does this, but after an exhaustive search I don't think one exists. In my mind, the magical tool would be some combination of TrueCrypt and rsync. I'd be happy just knowing which tools out there can fulfill my 3 requirements; after that I can stitch together the rest. Any thoughts? THANKS!

    Read the article
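
    A minimal sketch of how requirements #1 and #2 could be covered with stock tools, assuming a shared GPG passphrase is acceptable for key management: encrypt locally, then push the encrypted copies to S3 over HTTPS (the AWS CLI uses TLS by default). The directory names, bucket, and passphrase file below are placeholders, and this handles only one-way push, not the bi-directional syncing in requirement #3.

        # Hypothetical one-way push; run per client. Bucket, paths and passphrase file are placeholders.
        SYNC_DIR="$HOME/shared"
        ENC_DIR="$HOME/shared-encrypted"
        mkdir -p "$ENC_DIR"
        for f in "$SYNC_DIR"/*; do
          gpg --batch --yes --symmetric --cipher-algo AES256 \
              --passphrase-file "$HOME/.team-passphrase" \
              --output "$ENC_DIR/$(basename "$f").gpg" "$f"
        done
        aws s3 sync "$ENC_DIR" s3://my-team-bucket/shared/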

  • best practice to removing DC from Site that no longer connects via vpn in another city

    - by dasko
    Hi, I am looking for a recap of what I have done already, to see if I missed anything.

    I had two cities connected by WAN using a persistent IPsec tunnel between gateways. I had one DC (domain controller) in each city, and each was a global catalog server (GC). They were set up to replicate, and I had them configured under Sites and Services with their own subnets, etc.

    About 6 months ago the one city was removed, and I was not able to gracefully remove, through dcpromo, the server that was there. It is no longer used and cannot be brought back. The company went from two sites down to a single site. The problem is I had a whole bunch of KCC errors and replication bugs in the event viewer. I wanted to clean up my Active Directory, so I decided to use the ntdsutil metadata cleanup commands. I removed the server from the specified site based on a procedure from the Petri website. I then removed the instances of the old DC and site from Sites and Services. Then I cleaned up DNS by removing the Host A records and the NS server name from both the local DNS forward lookup zone and _msdcs. I also removed the reverse lookup zone for the subnet that no longer exists.

    Is there anything I missed? Thanks in advance for any help. gd

    Read the article
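
    For reference, the metadata cleanup described above is roughly the following ntdsutil sequence (from memory, so treat it as a sketch; the server and the domain/site numbers depend on what the list commands print):

        ntdsutil
        metadata cleanup
        connections
        connect to server <surviving-DC>
        quit
        select operation target
        list domains
        select domain 0
        list sites
        select site 0
        list servers in site
        select server <number-of-the-dead-DC>
        quit
        remove selected server
        quit
        quit

    One thing worth double-checking, since it is not in the list in the question: whether the departed DC still held any FSMO roles (netdom query fsmo) and whether its IP is still referenced as a DNS forwarder or in DHCP options.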

  • MySQL is killing the server IO.

    - by OneOfOne
    I manage a fairly large/busy vBulletin forum (running on the GigeNET cloud). The database is ~10 GB (~9 million posts, ~60 queries per second), and lately MySQL has been grinding the disk like there's no tomorrow according to iotop, and slowing the site. The last idea I can think of is using replication, but I'm not sure how much that would help, and I'm worried about database sync. I'm out of ideas; any tips on how to improve the situation would be highly appreciated.

    Specs: Debian Lenny 64bit, ~12GHz (6x2GHz) CPU, 7520gb RAM, 160GB disk.
    Kernel: 2.6.32-4-amd64
    mysqld Ver 5.1.54-0.dotdeb.0 for debian-linux-gnu on x86_64 ((Debian))

    Other software:
    vBulletin 3.8.4
    memcached 1.2.2
    PHP 5.3.5-0.dotdeb.0 (fpm-fcgi) (built: Jan 7 2011 00:07:27)
    lighttpd/1.4.28 (ssl) - a light and fast webserver
    PHP and vBulletin are configured to use memcached.

    MySQL settings:
    [mysqld]
    key_buffer = 128M
    max_allowed_packet = 16M
    thread_cache_size = 8
    myisam-recover = BACKUP
    max_connections = 1024
    query_cache_limit = 2M
    query_cache_size = 128M
    expire_logs_days = 10
    max_binlog_size = 100M
    key_buffer_size = 128M
    join_buffer_size = 8M
    tmp_table_size = 16M
    max_heap_table_size = 16M
    table_cache = 96

    Other: from the cloud's IO chart, we're averaging 100mb/s read.

    > vmstat
    procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
     r  b   swpd   free   buff   cache   si  so   bi   bo   in   cs  us sy id wa
     9  0  73140  36336   8968 1859160    0   0   42   15    3    2   6  1 89  5

    > /etc/init.d/mysql status
    Threads: 49  Questions: 252139  Slow queries: 164  Opens: 53573  Flush tables: 1  Open tables: 337  Queries per second avg: 61.302

    Moved from Super User.

    Read the article
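
    A hedged set of my.cnf adjustments commonly suggested for this kind of MyISAM-backed vBulletin workload; the exact numbers depend on how much of the machine's RAM is really free for MySQL, so treat these as starting points rather than a fix:

        [mysqld]
        key_buffer_size     = 1G     # 128M is very small for a ~10GB, read-heavy MyISAM database
        table_open_cache    = 512    # table_cache = 96 forces constant table re-opens (Opens: 53573 above)
        tmp_table_size      = 64M    # let more implicit temporary tables stay in memory
        max_heap_table_size = 64M
        query_cache_size    = 64M    # an oversized query cache can cost more than it saves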

  • Force database read to master if slave data is stale

    - by Jeff Storey
    I previously asked a specific question about this database replication for new user signup to which I got an answer, but I want to ask this in the more general sense. I have a database setup in which I am using a master/slave combination. I am using the slaves for load balancing (the data itself is partitioned/sharded across multiple databases, but each database has X slaves for load balancing). Let's say I write some data to the master. Now I do a subsequent read which hits a slave, but the slave has not yet caught up to the master. Is there a way (which can be done quickly since it will happen frequently) to determine if the data is stale in the slave so I can then route to the master? In my previous question, it was suggested to do simultaneous writes to the cache and the database. This solution seems practical, but there is still a chance that the data may have been removed from the cache but not yet updated in the slave. A possible solution is to ensure the cache is big enough (based on the typical application load) so the data will not be evicted within the time frame it takes to replicate the data. This seems like it may be feasible. Can anyone provide additional insight into this question? Thanks!

    Read the article
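
    One way to answer the "is this slave caught up to my write?" question quickly is with MySQL's own binlog coordinates rather than a cache: capture the master's position right after the write, then ask the chosen slave to wait briefly for that position. A sketch, with placeholder coordinates:

        -- On the master, immediately after the write:
        SHOW MASTER STATUS;   -- e.g. File: mysql-bin.000123, Position: 4567890

        -- On the slave the load balancer picked, before reading:
        SELECT MASTER_POS_WAIT('mysql-bin.000123', 4567890, 1);
        -- returns >= 0 : the slave has replayed past that point, safe to read here
        -- returns -1   : timed out (1 second here), fall back to the master
        -- returns NULL : replication is not running, fall back to the master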

  • Failover strategy for a 4 servers scenario

    - by Joao Villa-Lobos
    Hi all, I am trying to figure out how to set up replication & failover in a scenario with 4 servers (2 per location) where any server may assume the Master role.

    My initial scenario is the following: 2 servers in location A (one Master, one Slave); 2 servers in location B (two Slaves). For this I'm thinking of using the Master-Master Active-Passive configuration suggested in O'Reilly's "High Performance MySQL" on all of them, so each one can become a Master when needed. If the Master "dies", the other server in location A assumes the Master role whenever possible. It will always have a bigger priority than the servers in location B. A server in location B will only switch to Master if no server in location A is able to do so.

    Since MySQL can't handle this automatically, I need some other way to implement it. I've already read about Heartbeat and Maatkit. Is this the way to go? Has anyone used this in a similar scenario? Is there some other way to achieve this? Any pointers about failover will be appreciated. I want to keep this as simple as possible, avoiding stuff such as DRBD. I'm not concerned about high availability, just a way to switch roles automatically without too much hassle.

    I'm using SuSE Enterprise 10 and MySQL 5.1.30-community. Thanks in advance, João

    Read the article

  • Can I replicate data between mySQL and SQL Server/SQL Azure?

    - by Ernest Mueller
    I have a replicated mySQL setup running happily on Amazon AWS, making user data available locally in various regions. Now I'm faced with an app that needs to go up on Microsoft Azure, and I need to replicate the data over to there as well. So that's annoying. I am faced with several options:

    1. Replicate from mySQL to SQL Azure/SQL Server - seems like it would be lovely; is this possible? I'd consider using a third party tool and paying $$ if I had to. We're not using anything complicated in the db feature set, it's just data in tables.
    2. Get mySQL working on Microsoft Azure - which seems really dicey at best. All the HOWTOs I can find say "this is possible but you really shouldn't try this for production apps."
    3. Go non-realtime and do syncs from mySQL to SQL Azure, which may be somewhat expensive and slower.
    4. Rip out all my mySQL on Amazon and use SQL Server there, which would make Baby Jesus cry.

    Has anyone gotten mySQL to SQL Azure/SQL Server replication or syncing working? Or have any other approaches (a NoSQL solution that replicates and might meet our but-we-need-to-join-some-tables needs that can easily be run on Amazon and Azure)?

    Read the article

  • Mysql master-master not replicating

    - by frankil
    I'm setting up master-master MySQL replication on two servers (db1 and db2). I started by setting up db2 as a slave to db1 and that works fine. But when I set up db1 as a slave to db2, it isn't replicating. On the face of it everything looks fine, but the data isn't replicating. There are no errors in either of the error logs. The slave status is updating the binlog position.

    I have used mysqlbinlog to examine both the binlog on db2 and the relay log on db1, and all of the queries are going in there, but not being executed on db1. "show slave status" on both servers shows that both the slave IO and SQL threads are "Yes" and that the relay log position is updated by the SQL thread. Also on both servers:

    > echo "show processlist" | mysql | grep "system user"
    166819  system user  NULL  Connect  3655  Waiting for master to send event  NULL
    166820  system user  NULL  Connect  3507  Has read all relay log; waiting for the slave I/O thread to update it  NULL

    Relevant config for db1:
    server-id = 1
    log-slave-updates
    replicate-same-server-id = 0
    auto_increment_increment = 4
    auto_increment_offset = 1
    master-host = db2
    master-port = 3306
    master-user = slaveuser
    master-password = ***
    skip-slave-start
    sync_binlog = 1
    binlog-ignore-db=mysql

    Config for db2:
    server-id = 2
    log-slave-updates
    replicate-same-server-id = 0
    auto_increment_increment = 4
    auto_increment_offset = 2
    master-host = db1
    master-port = 3306
    master-user = slaveuser
    master-password = ***
    sync_binlog = 1
    relay-log=mysql-relay-bin
    binlog-ignore-db=mysql

    What else can I look for to make sure db1 executes the queries from db2?

    Read the article
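
    A diagnostic sketch for the non-working direction (db2 to db1): confirm the events are actually being written on db2, then compare what db1's SQL thread has executed and which filters apply. Note that with statement-based binlogging, binlog-ignore-db=mysql filters on the default database of the client session that issued the statement, so a client connected with "USE mysql" that writes into another schema is silently not logged - worth ruling out here.

        -- On db2: are the statements reaching the binlog at all?
        SHOW MASTER STATUS;
        SHOW BINLOG EVENTS IN 'mysql-bin.000001' LIMIT 30;   -- use the file name SHOW MASTER STATUS reports

        -- On db1: what has the SQL thread executed, and what is being filtered?
        SHOW SLAVE STATUS\G
        -- compare Read_Master_Log_Pos vs Exec_Master_Log_Pos, and check the
        -- Replicate_*_DB / Replicate_*_Table and Last_SQL_Error columns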

  • Question About mk-table-checksum Results

    - by stevenmusumeche
    Hello, I have 1 master and 2 slaves. I am using MySQL 5.1.42 on all servers. I am attempting to use mk-table-checksum to verify that their data is in sync, but I am getting unexpected results on one of the slaves.

    First, I generate the checksums on the master like this:

    mk-table-checksum h=localhost --databases MYDB --tables {$table_list} --replicate=MYDB.mk_checksum --chunk-size=10M

    My understanding is that this runs the checksum queries on the master, which then propagate via normal replication to the slaves. So no locking is needed, because the slaves will be at the same logical point in time when they run the checksum queries on themselves. Is this correct?

    Next, to verify that the checksums match, I run this on the master:

    mk-table-checksum --databases MYDB --replicate=IRC.mk_checksum --replicate-check 1 h=localhost,u=maatkit,p=xxxx

    If there are any differences, I repair the slaves like this:

    mk-table-sync --execute --verbose --replicate IRC.mk_checksum h=localhost,u=maatkit,p=xxxx

    After doing all of this, I repaired both slaves with mk-table-sync. However, every time I run this sequence (after everything has already been repaired), one slave is perfectly in sync but the other always has a few tables out of sync. I am 99.999% sure that the data on the slaves matches, since I repaired everything and the tables were not even updated on the master between runs of the checksum script. What would cause a few tables to always show out of sync on only one of the slaves? I am stuck. Here is the output:

    Differences on h=x.x.x.x,p=...,u=maatkit
    DB   TBL                CHUNK  CNT_DIFF  CRC_DIFF  BOUNDARIES
    IRC  product            10     0         1         product_id >= 147377 AND product_id < 162085
    IRC  post_order_survey  0      0         1         1=1
    IRC  mk_heartbeat       0      0         1         1=1
    IRC  mailing_list       0      0         1         1=1
    IRC  honey_pot_log      0      0         1         1=1
    IRC  product            12     0         1         product_id >= 176793 AND product_id < 191501
    IRC  product            18     0         1         product_id >= 265041
    IRC  orders             26     0         1         order_id >= 694472
    IRC  orders_product     6      0         1         op_id >= 935375

    Read the article
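
    Before trusting a repeated CRC_DIFF, it can help to ask mk-table-sync what it would actually change, without executing anything; if it prints nothing for a "different" chunk, the mismatch is in the checksum rather than the rows (floating-point columns and tables being written to during the run are the usual suspects). A sketch, with placeholder hosts:

        mk-table-sync --print --verbose \
            --databases IRC --tables product \
            h=master_host,u=maatkit,p=xxxx h=slave_host,u=maatkit,p=xxxx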

  • Good Enough Failover Strategy for DNS / MySQL / Email

    - by IMB
    I've asked and read a lot of questions regarding DNS failover, but the more I read the more complicated it becomes; some people say it's good enough, some say it isn't. No clear answers from what I read. I was wondering if we can set it straight once and for all, at least for the requirements of most websites out there. Right now let's assume the following: we don't really need load-balancing; what we need is a failover solution. We are running a website based on LAMP on a VPS. We need to make sure that the web server, MySQL, and email are always accessible, if not 99% of the time.

    Basically here's my idea and my questions about it:

    Web Server: We need at least one failover server (another VPS in a separate data center). Is DNS failover via round robin good enough; if not, what's the best approach? And how exactly do you implement it? How do you make sure the files you upload/delete on Server A are also on Server B?

    MySQL: I've only read a brief intro to MySQL replication, and I assume that I can replicate Server A to Server B and vice versa on the fly, right? So in case Server A fails and Server B is now running, it will continue to work and replicate to Server A when it becomes available. So in essence Server B is now the primary server, and will later fail over to Server A, should a failure happen again.

    Email: If we are gonna use DNS failover, using webmail or relying on emails stored on the server is probably not a good idea, right? Since some emails might be on Server A while some might be on Server B? I assume a basic email forwarder to a 3rd party (like Gmail, for example) is good enough to ensure all emails are kept in one place.

    Here's a basic diagram for a better picture: http://i.stack.imgur.com/KWSIi.png

    Read the article
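
    For the "files uploaded/deleted on Server A should also be on Server B" part, one simple, hedged option is a periodic rsync over SSH from the active server to the standby (path and hostname below are placeholders); tools like lsyncd or unison do the same thing closer to real time.

        rsync -az --delete /var/www/site/uploads/ standby-vps:/var/www/site/uploads/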

  • MySQL slave server from dumps

    - by HTF
    I've created a slave server from a live machine which is acting as a master now. I used the following procedure to create it:

    mysqldump --opt -Q -B --master-data=2 --all-databases > dump.sql

    Then I imported this dump on the new machine and applied the "CHANGE MASTER TO..." directive with the log file/position from the dump. Please note that I have around 8000 databases and I didn't stop the master while the dumps were running.

    The replication works fine, but is this a proper method for creating a slave server? I'm planning to promote this slave to a master (different location), so I would like to make sure that there is 100% data consistency between the servers. I've found this article where it says:

    The naive approach is just to use mysqldump to export a copy of the master and load it on the slave server. This works if you only have one database. With multiple databases, you'll end up with inconsistent data. Mysqldump will dump data from each database on the server in a different transaction. That means that your export will have data from a different point in time for each database.

    Thank you

    Read the article
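
    The quoted warning is about exactly this setup. A hedged variant of the same dump that takes a single consistent snapshot across all schemas, assuming the tables are InnoDB (MyISAM tables would still need the server quiesced or locked):

        mysqldump --single-transaction --master-data=2 -Q --all-databases > dump.sql

    With --single-transaction, the binlog coordinates recorded by --master-data correspond to one point in time for every database, which is what makes the slave (and a later promotion) consistent.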

  • KVM Hosting: How to efficiently replicate guests

    - by javano
    I have three KVM servers, each with 1 guest VM running directly on its local storage (so they are essentially getting a dedicated box worth of computing power each). In the event of a host failure I would like the guests replicated to at least one of the other hosts, so I can spin them up there until the failing host is fixed.

    I am curious about KVM cloning. I can clone a VM live or when it's suspended/shut down. Obviously suspended VMs will naturally be quicker to clone, but these three VMs comprise three parts of a single solution, so I don't want to ever have any one of them shut down. How can I efficiently clone these VMs between servers? I have had a couple of ideas, but are these insane, or is there a better method I have missed for my scenario?

    Set up a DRBD partition between box 1 and 2 where VM 1 runs from, so it is replicated between box 1 and box 2; repeat between box 2 & 3, and box 3 & 1 (this could be insane; I have never used DRBD, only read about it).

    Just use standard KVM CLI clone options to perform live clones (I'm dubious about this because I don't know how long it will take and what the performance impact will be during the clone).

    Run a copy of each VM on at least one other host, and have the guest on one host export its data to the matching guest on another host, where it can import that data (scripting this on the guest).

    Some other way? Ideas welcome!

    Side note: these servers have 4x15k SAS drives in a RAID 10, so they aren't rocketing fast, and as I mentioned, each VM runs from the host's local storage, no NAS or SAN etc. That is why I am asking this question about guest replication. Also, this isn't about disaster recovery. Guests will be exporting their data to a NAS over a VPN, so I am looking at how I can have them quickly spun up in a host failure situation.

    Read the article
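
    A hedged sketch of the second idea done semi-live on one VM: pause the guest just long enough to rsync its disk image and domain definition to the peer host, then resume. Hostnames and image paths are placeholders; the pause is only short if a previous copy already exists on the target so rsync only moves changed blocks, and the resulting copy behaves like a machine recovering from a power loss when it is eventually booted.

        virsh suspend vm1
        rsync -a --inplace /var/lib/libvirt/images/vm1.qcow2 host2:/var/lib/libvirt/images/
        virsh dumpxml vm1 > /tmp/vm1.xml
        scp /tmp/vm1.xml host2:/tmp/vm1.xml      # then 'virsh define /tmp/vm1.xml' on host2
        virsh resume vm1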

  • What is the best time to set the IP address for a server headed to a server colocation facility?

    - by jim_m_somewhere
    What is the best time to set the IP address for a server? I have a server that I am going to install the OS on and then send to a server colocation facility. The server is going to have Internet-facing services (www, email, etc.).

    I can set up a "fake" IP address during install (by fake I mean private, as in RFC 1918) and change the "fake" IPs to the real IPs once I set up the colocation service. The other option is to set up the colocation service, wait for them to give me the "real" IPs, and use them during the OS install. The ramifications are that if I use "fake" IPs during install, I will have to wait before I set up things like SSL certs. If I wait for IPs from the colocation provider, then I can set up SSL certs that use the "correct" (as in "real") IP addresses - no changes to the certs until they expire.

    Do the "gotchas" of changing an IP address on a server outweigh the benefits of a quick install? The other danger with using "fake" IPs is that I could make a mistake when I go through the various files to change the IP address to the "live" IP address.

    Server OS: CentOS 6.2 or CentOS 6.3, 64 bit. Apps: Apache 2.4.X httpd, MySQL 5.X (will eventually use replication)

    Read the article

  • SQLAuthority News – Don’t Be Afraid To Fool The World – Video by John Sonmez

    - by Pinal Dave
    Sometimes certain words and statements grab your attention, and it is hard to stop thinking about them afterwards. Something similar happened a few days ago when I read a Twitter statement from my friend and Pluralsight author John Sonmez. He tweeted a very interesting statement: "I don't know a single successful person, who doesn't deep down think they have the world fooled. #fooltheworld" by John Sonmez. When I read it, I was extremely intrigued by this statement. I read it many times, I shared it with my family, and I just could not stop interpreting it. It was indeed fun to read it again and again, and there are so many different meanings one can take away from the statement. I know John very well; he is a wonderful person and has very positive energy for life. I just had to request him to build a video around it. Within 5 days of my request, John created a wonderful video around this subject. I watched it multiple times, as it was a wonderful video. I am not going to write much about what was in the video, as I suggest you watch the video itself. Here is one of the personal stories I want to share, which is absolutely relevant to this video. I think my story fully resonates with John's.

    A Real Story from My Past

    Three years ago, I submitted a session to one of the SharePoint conferences as a SQL Server session. My session was accepted and I prepared it very well. I put more than 2 months' time into preparing for the session and I was very excited to present it. I reached the event venue after traveling thousands of miles and was very much excited to present the session. However, there was a little mix-up with the sessions. There were multiple sessions with titles similar to mine. One of the other speakers had also proposed a database-related session, and it was selected. When the material went to print, the printing team got confused and by mistake swapped the sessions. The other speaker got the "Performance with SQL Server" session and I received the "Performance with SharePoint" session. It was indeed a big mix-up, but that is how it appeared in the event guide, and it was marketed the same way throughout the event.

    A Big Mix-Up

    I had to talk with the event organizer, and we came to the conclusion that we all had good intentions but things just got mixed up, and now was the time when "the show must go on". I had a great amount of hesitation about going to present the session, as I had personally never worked that closely with SharePoint, and my session abstract talked about SharePoint tricks in depth. Two hours before the session I took the help of one of my friends and installed SharePoint on my box. He showed me a few things here and there, but it was never enough time to learn everything I wanted to learn.

    The Moments of Confidence

    I was very scared and nervous about going on stage, as SharePoint was not something I felt comfortable with. However, I decided to go on stage with confidence as a SharePoint expert. Though I did not know SharePoint at my best, I had confidence that whatever I know is correct and I will not misguide people. I had no intention to fool people, but I also had no intention to accept that I am a fool and that you all wasted your time and money attending my session. I decided to be honest, but at the same time decided to take the session beyond my expertise. The sixty minutes of the session went very well and I was able to manage all the difficult questions at a satisfactory level.
    When the session was over, my feeling was that I would not have presented or talked any differently if I had had more knowledge of SharePoint at that time. I think it was one of my best sessions, and it was reflected in the session feedback as well. I was the best-rated speaker across all the tracks and my session had the highest ranking. I was delighted and I learned a very valuable lesson: I must go beyond my limits and knowledge. I must aim higher and work harder. I should not lie, but I should have confidence that I have a good heart and I put 100% into my efforts.

    Lessons Learned

    Since this incident I have learned a lot about SharePoint, and I am now a regular speaker at various SharePoint conferences along with SQL Server sessions. I am motivated and I am not afraid. I know people have lots of expectations of me, but I have learned not to judge myself before I do my best. I leave the judgement of my efforts to my audience. I do not take the burden of the feedback on me, even though I know what my audience expects from me. I know what I know and I give my best. I must go out; if I fail, I learn from my mistake, but I must keep my progress trajectory very high. As John said in the video, sometimes success is not something we can achieve 100%, but we can keep getting closer to it. As long as we do not lose focus on our goal and do not deviate from our path, we are doing things right.

    Reference: Pinal Dave (http://blog.sqlauthority.com)  Filed under: About Me, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Could not initialize proxy - No Session again

    - by Iapilgrim
    Hi, I get this error in the log when viewing a page:

    ERROR [TP-Processor11] (LazyInitializationException.java:42) - could not initialize proxy - no Session
    org.hibernate.LazyInitializationException: could not initialize proxy - no Session
        at org.hibernate.proxy.AbstractLazyInitializer.initialize(AbstractLazyInitializer.java:132)
        at org.hibernate.proxy.AbstractLazyInitializer.getImplementation(AbstractLazyInitializer.java:174)
        at org.hibernate.proxy.pojo.javassist.JavassistLazyInitializer.invoke(JavassistLazyInitializer.java:190)
        at org.osmoz.contents.model.enm.ContentType_$$_javassist_71.getDefaultShortMode(ContentType_$$_javassist_71.java)
        at org.osmoz.contents.web.tapestry.components.EnmContentZone.getTemplate(EnmContentZone.java:67)
        at org.osmoz.contents.web.tapestry.base.AbstractRawContentZone.getContent(AbstractRawContentZone.java:67)
        at $PropertyConduit_1276091af82.get($PropertyConduit_1276091af82.java)
        at org.apache.tapestry5.internal.bindings.PropBinding.get(PropBinding.java:58)
        at org.apache.tapestry5.internal.structure.InternalComponentResourcesImpl$1.read(InternalComponentResourcesImpl.java:510)
        at org.apache.tapestry5.internal.structure.InternalComponentResourcesImpl$1.read(InternalComponentResourcesImpl.java:496)
        at org.apache.tapestry5.corelib.components.OutputRaw._$read_parameter_value(OutputRaw.java)
        at org.apache.tapestry5.corelib.components.OutputRaw.beginRender(OutputRaw.java:43)
        at org.apache.tapestry5.corelib.components.OutputRaw.beginRender(OutputRaw.java)
        at ...

    I know the problem is that the Session has been closed, but I really don't know why this error occurs; it happens only occasionally, which is why I can't pin down the root cause.

    Environment: Tapestry 5, JPA, Hibernate 3.3.2.GA.

    I've also set <filter-class>org.springframework.orm.jpa.support.OpenEntityManagerInViewFilter</filter-class> in web.xml.

    Read the article
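
    One thing to verify, shown as a hedged sketch rather than a known fix: the filter-class line only has an effect if the filter is also named and mapped in web.xml, and the mapping must cover the Tapestry requests that render the lazily-loaded entities (ordering relative to the Tapestry filter matters too).

        <filter>
            <filter-name>openEntityManagerInView</filter-name>
            <filter-class>org.springframework.orm.jpa.support.OpenEntityManagerInViewFilter</filter-class>
        </filter>
        <filter-mapping>
            <filter-name>openEntityManagerInView</filter-name>
            <url-pattern>/*</url-pattern>
        </filter-mapping>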

  • FBConnect for iPhone: sessionDidNotLogin, sessionDidLogout, session didLogin not called the second time

    - by Irene
    My problem is very similar to this question, however I am posting a new one, as the answer to the aforementioned does not seem to solve my problem. I have a multiview application - the first view is where the user logs in to Facebook, and the second where he picks an image and uploads it there. The first time the app runs, everything works fine, however if I return to the login view and press logout, then any calls to sessionDidNotLogin, sessionDidLogout or session didLogin don't seem to work. I found out that the first time, if I NSLog(@"%@",session.delegates); I have 2; my LoginViewController and the FBLoginButton. However, apart from that first time, the same log prints only the LoginViewController and not the FBLoginButton. I guess this is connected somehow, but I don't know how to solve it. Do I have to manually add the FBLoginButton to the session delegates, or I'm doing something else wrong here? Thank you for any help/suggestion.

    Read the article

  • Enable Session state in sharePoint 2010

    - by Albert D. Kallal
    I set up a test box with Server 2008 (Standard edition, not R2, and not the Hyper-V edition). I then installed SharePoint 2010. I was amazed how easily the whole setup went (the prerequisites setup on the SharePoint disk made this process oh so easy - great install system). Really, this was just so easy.

    This test box is being used for testing Access web services. I am able to publish Access applications to this test server, and the Access applications publish and run just fine on the SharePoint site through a web browser. However, the only thing that does not work is when I launch an Access report. The error message I get back is:

    This report failed to load because session state is not turned on.

    Here is a screen shot: I can’t seem to find the setting anywhere to turn session state on. Any hints or links on how to enable session state in SharePoint 2010 would be most appreciated.

    Read the article
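
    If memory serves, the ASP.NET session state service that Access report viewing needs is off by default in SharePoint 2010 and is enabled from the SharePoint 2010 Management Shell rather than from Central Administration; a one-line sketch:

        Enable-SPSessionStateService -DefaultProvision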

  • IIS6: PHP Sessions

    - by Alerty
    I have installed PHP to work with IIS6 (with FastCGI). I am able to view a sample test page that shows the PHP info with the following code:

    <?php phpinfo(); ?>

    Now that this works, I tried to migrate my PHP website to IIS6, and here is the list of errors/warnings I got:

    PHP Warning: session_start(): open(C:\WINDOWS\Temp\sess_rjbv0ialf7uf03to69q1e4l101, O_RDWR) failed: Permission denied (13) in C:\Site\index.php on line 11
    PHP Warning: Unknown: open(C:\WINDOWS\Temp\sess_rjbv0ialf7uf03to69q1e4l101, O_RDWR) failed: Permission denied (13) in Unknown on line 0
    PHP Warning: Unknown: Failed to write session data (files). Please verify that the current setting of session.save_path is correct (C:\WINDOWS\Temp) in Unknown on line 0

    After seeing this, I checked the php.ini file to make sure the session save path is set correctly:

    session.save_path="C:\WINDOWS\Temp"

    Yet doing so has done nothing! How can I make it work?

    Read the article
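
    The warnings are filesystem permission errors rather than a wrong php.ini value, so one hedged fix is to grant the IIS6 worker and anonymous accounts write access to the session directory (or, often cleaner, point session.save_path at a dedicated folder and grant it there). Account names vary per machine; a sketch:

        rem grant Change (read/write) on the session folder to the accounts PHP runs under
        cacls C:\WINDOWS\Temp /E /G "NETWORK SERVICE":C
        cacls C:\WINDOWS\Temp /E /G IUSR_%COMPUTERNAME%:C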

  • How do I pass a cookie to a Sinatra app using curl?

    - by Brandon Toone
    I'm using the code from the example titled "A Slightly Bigger Example" from this tutorial http://rubylearning.com/blog/2009/09/30/cookie-based-sessions-in-sinatra/ to figure out how to send a cookie to a Sinatra application, but I can't figure out how to set the values correctly.

    When I set the name to be "brandon" in the application, it creates a cookie with a value of BAh7BiIJdXNlciIMYnJhbmRvbg%3D%3D%0A, which is a URL encoding (http://ostermiller.org/calc/encode.html) of the value BAh7BiIJdXNlciIMYnJhbmRvbg== . Using that value I can send a cookie to the app correctly:

    curl -b "rack.session=BAh7BiIJdXNlciIMYnJhbmRvbg==" localhost:9393

    I'm pretty sure that value is a base64 encoding of the Ruby hash for the session, since the docs (http://rack.rubyforge.org/doc/classes/Rack/Session/Cookie.html) say: The session is a Ruby Hash stored as base64 encoded marshalled data set to :key (default: rack.session). I thought that meant all I had to do was base64 encode {"user"=>"brandon"} and use it in the curl command. Unfortunately that creates a different value than BAh7BiIJdXNlciIMYnJhbmRvbg==. Next I tried taking the base64 encoded value and decoding it at various base64 decoders online, but that results in strange characters (a box symbol and others), so I don't know how to recreate the value to even encode it.

    So my question is: do you know what characters/format I need to get the proper base64 encoding, and/or do you know of another way to pass a value using curl such that it will register as a proper cookie for a Sinatra app?

    Read the article
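
    The "strange characters" are Ruby Marshal bytes: the Rack cookie session of that era is Marshal.dump of the session hash, then Base64, then URL-escaping - not Base64 of the literal hash text. A sketch of reproducing the value, assuming the session contents are exactly {"user"=>"brandon"}:

        require 'base64'
        require 'cgi'

        raw   = Marshal.dump({ "user" => "brandon" })   # binary data, hence the box symbols
        value = Base64.encode64(raw)                    # "BAh7BiIJdXNlciIMYnJhbmRvbg==\n"
        puts CGI.escape(value)                          # "BAh7BiIJdXNlciIMYnJhbmRvbg%3D%3D%0A"
        # curl -b "rack.session=#{CGI.escape(value)}" localhost:9393

    Note that if the app passes a :secret to Rack::Session::Cookie, the cookie also carries an HMAC and a hand-built value will be rejected.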

  • No Hibernate Session bound to thread grails

    - by naresh
    Actually we've got a lot of Quartz jobs in our application. For some time all of the jobs work fine; after a while, all jobs start throwing the following exception:

    org.quartz.JobExecutionException: No Hibernate Session bound to thread, and configuration does not allow creation of non-transactional one here [See nested exception: java.lang.IllegalStateException: No Hibernate Session bound to thread, and configuration does not allow creation of non-transactional one here]
        at grails.plugins.quartz.QuartzDisplayJob.execute(QuartzDisplayJob.groovy:37)
        at org.quartz.core.JobRunShell.run(JobRunShell.java:202)
        at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:573)
    Caused by: java.lang.IllegalStateException: No Hibernate Session bound to thread, and configuration does not allow creation of non-transactional one here
        at grails.plugins.quartz.QuartzDisplayJob.execute(QuartzDisplayJob.groovy:29)
        ... 2 more

    Read the article
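
    A hedged workaround sketch (not a root-cause fix, and the class/domain names below are placeholders): wrap the job body in withTransaction (or withNewSession) on any domain class, so GORM calls made on the Quartz worker thread always find a bound session even after the previously bound one has been closed:

        class MyReportJob {
            def execute() {
                SomeDomainClass.withTransaction {
                    // existing job logic that reads/writes GORM domain objects
                }
            }
        }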

  • Sharing $_SESSION variables across subdomains using PHP

    - by scott
    Hi, I am trying to share the contents of the session across two subdomains, but for some reason it is not working. The session ID is exactly the same on both subdomains, but the variables aren't available. I can achieve this with cookies, and that is working, but I would rather use the values in the session. Here is how I am setting the domain for the session: Thanks, Scott

    Read the article
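
    Two settings usually have to line up for this to work, and both must be applied before session_start() on every subdomain: the cookie domain (a leading dot so the cookie is sent to all subdomains) and a shared save path, since the variables live in the session files rather than in the cookie. A hedged sketch with placeholder values:

        <?php
        ini_set('session.cookie_domain', '.example.com');             // sent to a.example.com and b.example.com
        ini_set('session.save_path', '/var/lib/php/shared-sessions'); // same storage for both vhosts
        session_start();
        $_SESSION['user'] = 'scott';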

  • Prevent Ruby on Rails from sending the session header

    - by hurikhan77
    How do I prevent Rails from always sending the session header (Set-Cookie)? This is a security problem if the application also sends the Cache-Control: public header. My application touches (but does not modify) the session hash in some/most actions. These pages display no private content, so I want them to be cacheable - but Rails always sends the cookie header, no matter whether the session hash being sent differs from the previous one or not. What I want to achieve is to only send the hash if it is different from the one received from the client. How can you do that? And should that fix perhaps also go into an official Rails release? What do you think?

    Read the article
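
    Lacking a Rails switch for this, one hedged approach is a tiny Rack middleware that strips Set-Cookie from responses the application has already marked publicly cacheable; the class name below is made up, and in a Rails 2.x app it would be registered with config.middleware.use.

        class StripSessionCookieWhenPublic
          def initialize(app)
            @app = app
          end

          def call(env)
            status, headers, body = @app.call(env)
            headers.delete('Set-Cookie') if headers['Cache-Control'].to_s.include?('public')
            [status, headers, body]
          end
        end

        # config/environment.rb:
        #   config.middleware.use StripSessionCookieWhenPublic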

  • RuntimeException: Could not start Selenium session: Internal Server Error

    - by user79685
    I am trying to detect a mid-air collision problem (simultaneous editing) using Selenium. So I start Selenium session A with the following (super class):

    selenium = new MASSelenium(serverHost, serverPort, "*iexplore", browserURL);
    selenium.start();
    selenium.open("index.cgi");

    Then I try starting a different Selenium session B, pointing to a different browser, from the subclass:

    selenium2 = new MASSelenium(getServerHost(), getServerPort(), "*firefox", getBrowserURL());
    selenium2.start();
    selenium2.open("index.cgi");

    It works fine on my local machine (behaves as expected), but when I run this same test on a remote machine (using the Bamboo build tool), I get this exception:

    java.lang.RuntimeException: Could not start Selenium session: Internal Server Error
        at com.thoughtworks.selenium.DefaultSelenium.start(DefaultSelenium.java:89)
        at gov.baba.arc.mas.selenium.tests.SimultaneousEditingConflictDetected.setUp(SimultaneousEditingConflictDetected.java:78)
    Caused by: com.thoughtworks.selenium.SeleniumException: Internal Server Error
        at com.thoughtworks.selenium.HttpCommandProcessor.throwAssertionFailureExceptionOrError(HttpCommandProcessor.java:97)
        at com.thoughtworks.selenium.HttpCommandProcessor.getCommandResponseAsString(HttpCommandProcessor.java:168)
        at com.thoughtworks.selenium.HttpCommandProcessor.executeCommandOnServlet(HttpCommandProcessor.java:104)
        at com.thoughtworks.selenium.HttpCommandProcessor.doCommand(HttpCommandProcessor.java:86)

    Any idea why this is happening?

    Read the article

  • Java servlet and JSP accessing the same session bean

    - by Mykola Golubyev
    Let's say I have a simple login servlet that checks the passed name, creates a User object, and stores it in the session:

    User user = new User();
    user.setId(name);
    request.getSession().setAttribute("user", user);
    response.sendRedirect("index.jsp");

    In the index.jsp page I access the user object through jsp:useBean:

    <jsp:useBean id="user" scope="session" class="package.name.User"/>
    <div class="panel">
        Welcome ${user.id}
    </div>

    It works so far. The question: is this valid usage, or does it only work because the current implementation happens to use the same name as the JSP bean id when it stores and looks up the bean in the session?

    Read the article
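
    It is not just a naming coincidence: <jsp:useBean scope="session"> is specified to look the bean up under its id in the session and only instantiate one if nothing is bound yet, so it finds the object the servlet stored. Roughly, the generated servlet code is equivalent to this sketch (import of the User class assumed):

        User user = (User) session.getAttribute("user");
        if (user == null) {
            user = new User();
            session.setAttribute("user", user);
        }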
