Search Results

Search found 15797 results on 632 pages for 'session variables'.


  • Stopping the manipulation of variables used for data collection?

    - by Ruinous
    I am working on a project in Java and I am hoping to collect statistics from the client. A problem I fear is the manipulation of the variables used for collection, which would lead to illegitimate statistics. Is it in any way possible to prevent variables from being manipulated, or is manipulation always possible? For example: I want to log the actions performed per hour by the client. The variable acting as a counter for the number of actions performed is manipulated, and a much larger amount is added to the counter. This data is then uploaded to the server (using a multi-tier architecture, of course, to prevent even more problems) and considered 'legit.' Is there any way to prevent this?
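    A tamper-proof client-side counter is not really achievable, because the client controls its own memory; the usual mitigation is to count, or at least sanity-check, on the server. A minimal Java sketch of that idea (the class name, client identifier and hourly ceiling are all invented for illustration):

        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.atomic.AtomicInteger;

        public class ActionStats {
            // Assumed ceiling; anything above it is treated as suspect rather than 'legit'.
            private static final int MAX_ACTIONS_PER_HOUR = 1000;
            private final ConcurrentHashMap<String, AtomicInteger> counts = new ConcurrentHashMap<>();

            // Called once per action the server itself observes,
            // instead of trusting a client-reported total.
            public boolean recordAction(String clientId) {
                int n = counts.computeIfAbsent(clientId, k -> new AtomicInteger()).incrementAndGet();
                return n <= MAX_ACTIONS_PER_HOUR;
            }
        }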

    Read the article

  • How does a web container like Tomcat handle Struts 2 variables?

    - by mobby1982
    I am a newbie and I have a question regarding the Struts 2 framework and Tomcat. I know that each request has its own thread, but are the variables defined in a Struts action (as fields of the action class) shared amongst requests? For example: if I have a variable declared as int pageNo; and I am using it in a method called paginationAll(), can I use the same variable (pageNo) in another method, say paginationMaterialAll(), in the same action? Or does each thread get its own set of variables even though they are defined at class level?
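    For illustration, a minimal sketch of how this usually plays out (class and method names are invented): Struts 2 builds a new action instance for every request, so a field like pageNo is per-request state shared between that instance's methods, not something shared across threads the way a servlet field is.

        import com.opensymphony.xwork2.ActionSupport;

        public class PaginationAction extends ActionSupport {
            // Instance field: a fresh PaginationAction is created for each request,
            // so this value is not shared across requests or threads.
            private int pageNo;

            public String paginationAll() {
                pageNo = 1;               // visible to other methods of this same instance
                return SUCCESS;
            }

            public String paginationMaterialAll() {
                // Sees the pageNo of this instance only, i.e. of this one request.
                return SUCCESS;
            }

            public void setPageNo(int pageNo) { this.pageNo = pageNo; }
            public int getPageNo() { return pageNo; }
        }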

    Read the article

  • 501 Error during Libjingle PCP on Amazon EC2 running Openfire

    - by AeroBuffalo
    I am trying to implement Google's Libjingle (version: 0.6.14) PCP example and I am getting a 501: feature not implemented error during execution. Specifically, the error occurs after each "account" has connected, been authenticated and began communicating with the other. An abbreviated log of the interaction is provided at the end. I have set up my own jabber server (using OpenFire on an Amazon EC2 server), have opened all of the necessary ports and have added each "account" to the other's roster. The server has been set to allow for file transfers. My being new to working with servers, I am not sure why this error is occur and how to go about fixing it. Thanks in advance, AeroBuffalo P.S. Let me know if there is any additional information needed (i.e. the full program log for either/both ends). Receiving End: [018:217] SEND >>>>>>>>>>>>>>>>>>>>>>>>> : Thu Jul 5 14:17:15 2012 [018:217] <iq to="[email protected]/pcp" type="set" id="5"> [018:217] <jingle xmlns="urn:xmpp:jingle:1" action="session-initiate" sid="402024303" initiator="[email protected]/pcp"> [018:217] <content name="securetunnel" creator="initiator"> [018:217] <description xmlns="http://www.google.com/talk/securetunnel"> [018:217] <type>send:winein.jpeg</type> [018:217] <client-cert>--BEGIN CERTIFICATE--END CERTIFICATE--</client-cert> [018:217] </description> [018:217] <transport xmlns="http://www.google.com/transport/p2p"/> [018:217] </content> [018:217] </jingle> [018:217] <session xmlns="http://www.google.com/session" type="initiate" id="402024303" initiator="[email protected]/pcp"> [018:217] <description xmlns="http://www.google.com/talk/securetunnel"> [018:217] <type>send:winein.jpeg</type> [018:217] <client-cert>--BEGIN CERTIFICATE--END CERTIFICATE--</client-cert> [018:217] </description></session> [018:217] </iq> [018:217] RECV <<<<<<<<<<<<<<<<<<<<<<<<< : Thu Jul 5 14:17:15 2012 [018:217] <presence to="[email protected]/pcp" from="forgesend" type="error"> [018:217] <error code="404" type="cancel"> [018:217] <remote-server-not-found xmlns="urn:ietf:params:xml:ns:xmpp-stanzas"/> [018:217] </error></presence> [018:218] RECV <<<<<<<<<<<<<<<<<<<<<<<<< : Thu Jul 5 14:17:15 2012 [018:218] <presence to="[email protected]/pcp" from="forgesend" type="error"> [018:218] <error code="404" type="cancel"> [018:218] <remote-server-not-found xmlns="urn:ietf:params:xml:ns:xmpp-stanzas"/> [018:218] </error></presence> [018:264] RECV <<<<<<<<<<<<<<<<<<<<<<<<< : Thu Jul 5 14:17:15 2012 [018:264] <iq type="result" id="3" to="[email protected]/pcp"> [018:264] <query xmlns="google:jingleinfo"> [018:264] <stun> [018:264] <server host="stun.xten.net" udp="3478"/> [018:264] <server host="jivesoftware.com" udp="3478"/> [018:264] <server host="igniterealtime.org" udp="3478"/> [018:264] <server host="stun.fwdnet.net" udp="3478"/> [018:264] </stun> [018:264] <publicip ip="65.101.207.121"/> [018:264] </query></iq> [018:420] RECV <<<<<<<<<<<<<<<<<<<<<<<<< : Thu Jul 5 14:17:15 2012 [018:420] <iq to="[email protected]/pcp" type="set" id="5" from="[email protected]/pcp"> [018:420] <jingle xmlns="urn:xmpp:jingle:1" action="session-initiate" sid="3548650675" initiator="[email protected]/pcp"> [018:420] <content name="securetunnel" creator="initiator"> [018:420] <description xmlns="http://www.google.com/talk/securetunnel"> [018:420] <type>recv:wineout.jpeg</type> [018:420] <client-cert>--BEGIN CERTIFICATE--END CERTIFICATE--</client-cert> [018:420] </description> [018:420] <transport xmlns="http://www.google.com/transport/p2p"/> [018:420] 
</content></jingle> [018:420] <session xmlns="http://www.google.com/session" type="initiate" id="3548650675" initiator="[email protected]/pcp"> [018:420] <description xmlns="http://www.google.com/talk/securetunnel"> [018:420] <type>recv:wineout.jpeg</type> [018:420] <client-cert>--BEGIN CERTIFICATE--END CERTIFICATE--</client-cert> [018:420] </description></session></iq> [018:421] TunnelSessionClientBase::OnSessionCreate: received=1 [018:421] Session:3548650675 Old state:STATE_INIT New state:STATE_RECEIVEDINITIATE Type:http://www.google.com/talk/securetunnel Transport:http://www.google.com/transport/p2p [018:421] TunnelSession::OnSessionState(Session::STATE_RECEIVEDINITIATE) [018:421] SEND >>>>>>>>>>>>>>>>>>>>>>>>> : Thu Jul 5 14:17:15 2012 [018:421] <iq to="[email protected]/pcp" id="5" type="result"/> [018:465] RECV <<<<<<<<<<<<<<<<<<<<<<<<< : Thu Jul 5 14:17:15 2012 [018:465] <iq to="[email protected]/pcp" id="5" type="result" from="[email protected]/pcp"/> [198:665] RECV <<<<<<<<<<<<<<<<<<<<<<<<< : Thu Jul 5 14:20:15 2012 [198:665] <iq type="get" id="162-10" from="forgejabber.com" to="[email protected]/pcp"> [198:665] <ping xmlns="urn:xmpp:ping"/> [198:665] /iq> [198:665] SEND >>>>>>>>>>>>>>>>>>>>>>>>> : Thu Jul 5 14:20:15 2012 [198:665] <iq type="error" id="162-10" to="forgejabber.com"> [198:665] <ping xmlns="urn:xmpp:ping"/> [198:665] <error code="501" type="cancel"> [198:665] <feature-not-implemented xmlns="urn:ietf:params:xml:ns:xmpp-stanzas"/> [198:665] </error> [198:665] </iq> Sender: [019:043] SEND >>>>>>>>>>>>>>>>>>>>>>>>> : Thu Jul 5 14:17:15 2012 [019:043] <iq type="get" id="3"> [019:043] <query xmlns="google:jingleinfo"/> [019:043] </iq> [019:043] SEND >>>>>>>>>>>>>>>>>>>>>>>>> : Thu Jul 5 14:17:15 2012 [019:043] <iq to="[email protected]/pcp" type="set" id="5"> [019:043] <jingle xmlns="urn:xmpp:jingle:1" action="session-initiate" sid="3548650675" initiator="[email protected]/pcp"> [019:043] <content name="securetunnel" creator="initiator"> [019:043] <description xmlns="http://www.google.com/talk/securetunnel"> [019:043] <type>recv:wineout.jpeg</type> [019:043] <client-cert>--BEGIN CERTIFICATE----END CERTIFICATE--</client-cert> [019:043] </description> [019:043] <transport xmlns="http://www.google.com/transport/p2p"/> [019:043] </content> [019:043] </jingle> [019:043] <session xmlns="http://www.google.com/session" type="initiate" id="3548650675" initiator="[email protected]/pcp"> [019:043] <description xmlns="http://www.google.com/talk/securetunnel"> [019:043] <type>recv:wineout.jpeg</type> [019:043] <client-cert>--BEGIN CERTIFICATE--END CERTIFICATE--</client-cert> [019:043] </description></session></iq> [019:043] RECV <<<<<<<<<<<<<<<<<<<<<<<<< : Thu Jul 5 14:17:15 2012 [019:043] <presence to="[email protected]/pcp" from="forgereceive" type="error"> [019:043] <error code="404" type="cancel"> [019:043] <remote-server-not-found xmlns="urn:ietf:params:xml:ns:xmpp-stanzas"/> [019:043] </error></presence> [019:044] RECV <<<<<<<<<<<<<<<<<<<<<<<<< : Thu Jul 5 14:17:15 2012 [019:044] <presence to="[email protected]/pcp" from="forgereceive" type="error"> [019:044] <error code="404" type="cancel"> [019:044] <remote-server-not-found xmlns="urn:ietf:params:xml:ns:xmpp-stanzas"/> [019:044] </error></presence> [019:044] RECV <<<<<<<<<<<<<<<<<<<<<<<<< : Thu Jul 5 14:17:15 2012 [019:044] <iq to="[email protected]/pcp" type="set" id="5" from="[email protected]/pcp"> [019:044] <jingle xmlns="urn:xmpp:jingle:1" action="session-initiate" sid="402024303" initiator="[email protected]/pcp"> 
[019:044] <content name="securetunnel" creator="initiator"> [019:044] <description xmlns="http://www.google.com/talk/securetunnel"> [019:044] <type>send:winein.jpeg</type> [019:044] <client-cert>--BEGIN CERTIFICATE--END CERTIFICATE--</client-cert> [019:044] </description> [019:044] <transport xmlns="http://www.google.com/transport/p2p"/> [019:044] </content></jingle> [019:044] <session xmlns="http://www.google.com/session" type="initiate" id="402024303" initiator="[email protected]/pcp"> [019:044] <description xmlns="http://www.google.com/talk/securetunnel"> [019:044] <type>send:winein.jpeg</type> [019:044] <client-cert>--BEGIN CERTIFICATE--END CERTIFICATE--</client-cert> [019:044] </description></session></iq> [019:044] TunnelSessionClientBase::OnSessionCreate: received=1 [019:044] Session:402024303 Old state:STATE_INIT New state:STATE_RECEIVEDINITIATE Type:http://www.google.com/talk/securetunnel Transport:http://www.google.com/transport/p2p [019:044] TunnelSession::OnSessionState(Session::STATE_RECEIVEDINITIATE) [019:044] SEND >>>>>>>>>>>>>>>>>>>>>>>>> : Thu Jul 5 14:17:15 2012 [019:044] <iq to="[email protected]/pcp" id="5" type="result"/> [019:088] RECV <<<<<<<<<<<<<<<<<<<<<<<<< : Thu Jul 5 14:17:15 2012 [019:088] <iq type="result" id="3" to="[email protected]/pcp"> [019:088] <query xmlns="google:jingleinfo"> [019:088] <stun> [019:088] <server host="stun.xten.net" udp="3478"/> [019:088] <server host="jivesoftware.com" udp="3478"/> [019:088] <server host="igniterealtime.org" udp="3478"/> [019:088] <server host="stun.fwdnet.net" udp="3478"/> [019:088] </stun> [019:088] <publicip ip="65.101.207.121"/> [019:088] </query> [019:088] </iq> [019:183] RECV <<<<<<<<<<<<<<<<<<<<<<<<< : Thu Jul 5 14:17:15 2012 [019:183] <iq to="[email protected]/pcp" id="5" type="result" from="[email protected]/pcp"/> [199:381] RECV <<<<<<<<<<<<<<<<<<<<<<<<< : Thu Jul 5 14:20:15 2012 [199:381] <iq type="get" id="474-11" from="forgejabber.com" to="[email protected]/pcp"> [199:381] <ping xmlns="urn:xmpp:ping"/> [199:381] </iq> [199:381] SEND >>>>>>>>>>>>>>>>>>>>>>>>> : Thu Jul 5 14:20:15 2012 [199:381] <iq type="error" id="474-11" to="forgejabber.com"> [199:381] <ping xmlns="urn:xmpp:ping"/> [199:381] <error code="501" type="cancel"> [199:381] <feature-not-implemented xmlns="urn:ietf:params:xml:ns:xmpp-stanzas"/> [199:382] </error></iq>

    Read the article

  • How to rename multiple files by replacing a word in the file name, taken from shell script variables?

    - by fy6877
    This question is like this thread: How to rename multiple files by replacing word in file name? My example is more complex than that topic. The two variables are $name and $newname, which come from another location in the shell script. $name and $newname may contain unicode words or special symbols like []<? etc., so could anyone help me with a piece of shell script that solves this file-name replacement? BTW, I tried two kinds of commands to change the part of the file name, but they don't work:
    rename.ul '$name' '$newname' /home/fy6877/test/final/*
    ls /home/fy6877/test/final/ | xargs -I$ rename.ul '$name' '$newname' $
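    One reason those two commands fail is the single quotes, which stop the shell from expanding $name and $newname at all. A minimal sketch of a pure-bash alternative (the directory is taken from the question; quoting every expansion keeps unicode and characters such as [ ] < ? intact):

        #!/bin/bash
        # $name and $newname are assumed to be set earlier in the script.
        for f in /home/fy6877/test/final/*; do
            case $f in
                *"$name"*)
                    # Quoting the pattern in ${f/.../...} makes $name literal,
                    # so glob characters inside it are not interpreted.
                    mv -- "$f" "${f/"$name"/"$newname"}"
                    ;;
            esac
        done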

    Read the article

  • Is it possible to have environment variables in the working-directory path shown by PS1?

    - by mthpvg
    I am on Lubuntu and I am using bash. My PS1 (in .bashrc) is:
    PS1="\w> "
    I like it because I need to paste the working directory all the time. The problem is that the path is always very long, and since I use Terminator I only have half of my screen's width available to display it... it is ugly and annoying. My command prompt looks like this:
    /this/is/a/very/long/path/that/i/want/to/make/shorter >
    I'd like to set in my environment variables:
    tiavl=/this/is/a/very/long
    and then get:
    $tiavl/path/that/i/want/to/make/shorter >
    The goal is to have something shorter in the command prompt, but I still want to be able to copy-paste it and do:
    cd $tiavl/path/that/i/want/to/make/shorter
    It is a bit like with $HOME:
    ~/path/that/i/want/to/make/shorter >
    I know where I am and I can copy-paste the ~. Thanks.
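    A minimal sketch of one way to get that display (the variable name tiavl comes from the question; the wiring around it is an assumption). $PWD itself stays untouched, so copy-pasting still works because the shell expands $tiavl when you cd:

        # in ~/.bashrc
        tiavl=/this/is/a/very/long

        short_pwd() {
            local p=$PWD
            case $p in
                "$tiavl"*) p='$tiavl'${p#"$tiavl"} ;;   # print the literal text $tiavl
            esac
            printf '%s' "$p"
        }

        PS1='$(short_pwd)> '    # single quotes: evaluated each time the prompt is drawn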

    Read the article

  • SQL SERVER – What is Spatial Database? – Developing with SQL Server Spatial and Deep Dive into Spatial Indexing

    - by pinaldave
    What is a Spatial Database? A spatial database is a database that is optimized to store and query data related to objects in space, including points, lines and polygons. While typical databases can understand various numeric and character types of data, additional functionality needs to be added for databases to process spatial data types. (Source: Wikipedia) Today I will be talking about this same subject at Microsoft TechEd India. If you want to learn about the spatial aspect of data and how to integrate it with SQL Server, this is the perfect session for you. Spatial is a very special concept in SQL Server and I really like how it is implemented. In general, Performance Tuning and Query Optimization is something I have always enjoyed in my professional life. Indexes are my best friends; many times by implementing them, and many times by removing them, I have improved the performance of a system. In this session, I will be talking about indexes along with spatial data. As the spatial database is a very interesting concept, I will cover this subject in ten short but very interesting slides. I will make sure that in the very first 20 minutes you understand the following topics:
    Introduction to Spatial Database
    One line definition
    Understanding Spatial Indexing
    Index Internals
    Query/Performance Tuning
    Query Hinting/Cost Analysis
    Spatial Index Catalog Views
    Performance Troubleshooting
    Finding Optimal Index using Spatial Index SP
    Common Errors
    Index Maintenance
    The slide deck will be followed by a roughly 30-minute demo which tells the story of geometry, geography, index internals and performance tuning. If you are interested in learning how GIS works and how SQL Server supports these wonderful tools out of the box, you will really like how the story is told. I am sure all the people who attend the event know how Bangalore is positioned on the map of India. I will take the example of Bangalore and Hyderabad and demonstrate how an index can improve performance. There are lots of stories to tell in this session, and I will open it with the beautiful script of Botticelli's Birth of Venus created by Michael J. Swart. I will also demonstrate a few real-life scenarios where I talk about the spatial database and its usage. Do not miss this session. At the end of the session a book will be awarded to the best participant.
    My session details:
    Session 3: Developing with SQL Server Spatial and Deep Dive into Spatial Indexing
    Date: April 14, 2010
    Time: 5:00pm-6:00pm
    Microsoft SQL Server 2008 delivers new spatial data types that enable you to consume, use, and extend location-based data through spatial-enabled applications. Attend this session to learn how to use the spatial functionality in the next version of SQL Server to build and optimize spatial queries. This session outlines how to use the new geography data type to store geodetic spatial data and perform operations on it, use the new geometry data type to store planar spatial data and perform operations on it, take advantage of the new spatial indexes for high-performance queries, use the new spatial results tab to quickly and easily view spatial query results directly from within Management Studio, and extend spatial data capabilities by building or integrating location-enabled applications through support for spatial standards and specifications, and much more.
    Reference: Pinal Dave (http://blog.SQLAuthority.com)
    Filed under: Pinal Dave, SQL, SQL Authority, SQL Index, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, T SQL, Technology
    Tagged: Spatial Database
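    To make the session topic concrete, here is a small T-SQL sketch of the kind of query the demo revolves around (the table, column names and coordinates are invented for illustration): a geography column, a spatial index on it, and a distance query between two Indian cities.

        -- Table with a geography column and a spatial index on it
        CREATE TABLE Cities (
            CityID   INT IDENTITY PRIMARY KEY,
            CityName NVARCHAR(100),
            Location GEOGRAPHY
        );

        CREATE SPATIAL INDEX SIdx_Cities_Location ON Cities(Location);

        INSERT INTO Cities (CityName, Location) VALUES
            ('Bangalore', geography::Point(12.9716, 77.5946, 4326)),
            ('Hyderabad', geography::Point(17.3850, 78.4867, 4326));

        -- Distance (in meters) from Bangalore to every stored city
        DECLARE @Bangalore GEOGRAPHY =
            (SELECT Location FROM Cities WHERE CityName = 'Bangalore');

        SELECT CityName, Location.STDistance(@Bangalore) AS DistanceMeters
        FROM   Cities
        ORDER BY DistanceMeters;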

    Read the article

  • Desktop login fails, terminal works

    - by Tobias
    I have a freshly set up 12.04 LTS PC system (120 GB SSD, 1 TB HDD, 16 GiB RAM). For a few days now, I can't log in to the graphical desktop anymore: there is a very short flashing shell window which disappears very quickly, and I'm confronted with the login screen again. I believe it says something about modprobe and vbox, but I can't read it fast enough... I can log in to a terminal (Ctrl+Alt+F1). It did not help to chown all contents of my home directory to me:my-group, as suggested here. This is what I could find in /var/log, grepping for the date and time (I inserted line breaks after <my-hostname>; real time values preserved):
    auth.log:
    <date> 22:43:01 <my-hostname> lightdm: pam_succeed_if(lightdm:auth): requirement "user ingroup nopasswdlogin" not met by user "tobias"
    <date> 22:43:08 <my-hostname> lightdm: pam_unix(lightdm:session): session closed for user lightdm
    <date> 22:43:08 <my-hostname> lightdm: pam_unix(lightdm:session): session opened for user tobias by (uid=0)
    <date> 22:43:08 <my-hostname> lightdm: pam_ck_connector(lightdm:session): nox11 mode, ignoring PAM_TTY :0
    <date> 22:43:08 <my-hostname> lightdm: pam_unix(lightdm:session): session closed for user tobias
    <date> 22:43:09 <my-hostname> lightdm: pam_unix(lightdm:session): session opened for user lightdm by (uid=0)
    <date> 22:43:09 <my-hostname> lightdm: pam_ck_connector(lightdm:session): nox11 mode, ignoring PAM_TTY :0
    <date> 22:43:10 <my-hostname> lightdm: pam_succeed_if(lightdm:auth): requirement "user ingroup nopasswdlogin" not met by user "tobias"
    <date> 22:43:10 <my-hostname> dbus[756]: [system] Rejected send message, 2 matched rules; type="method_call", sender="1:43" (uid=104 pid=1639 comm="/usr/lib/indicator-datetime/indicator-datetime-ser") interface="org.freedesktop.DBus.Properties" member="GetAll" error name="(unset)" requested_reply="0" destination=":1.15" (uid=0 pid=1005 comm="/usr/sbin/console-kit-daemon --no-daemon ")
    kern.log:
    <date> 22:43:00 <my-hostname> kernel: [ 16.084525] eth0: no IPv6 routers present
    syslog:
    <date> 22:43:00 <my-hostname> kernel: [ 16.084525] eth0: no IPv6 routers present
    <date> 22:43:01 <my-hostname> ntpdate[1492]: adjust time server 91.189.94.4 offset -0.162831 sec
    <date> 22:43:08 <my-hostname> acpid: client 969[0:0] has disconnected
    <date> 22:43:08 <my-hostname> acpid: client connected from 1553[0:0]
    <date> 22:43:08 <my-hostname> acpid: 1 client rule loaded
    I have VirtualBox and TrueCrypt installed, but I can't think of a reason why they might prevent a graphical login. I'm confused: what is this about requirement "user ingroup nopasswdlogin" not met? I do log in using a password, and the password works fine when logging in to a terminal! Can I somehow read the error output, e.g. by delaying it, redirecting it to a file, or having the system prompt me to press a key? Could a recent update have caused my problem? Should I install the pending updates? How, by the way, without access to the graphical UI? I have some working knowledge of the Linux shell, but I'm new to Ubuntu. Any help would be appreciated.
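    On the questions about reading the error output and updating without the GUI, a quick sketch of what is usually possible from the text console (standard Ubuntu 12.04 paths; adjust if your setup differs):

        less ~/.xsession-errors                              # output of the failed graphical session
        sudo less /var/log/lightdm/lightdm.log               # the display manager's own log
        sudo apt-get update && sudo apt-get dist-upgrade     # apply pending updates from the terminal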

    Read the article

  • return the result of a query and the total number of rows in a single function

    - by csotelo
    This is a question about the best way of working, whether there are other alternatives or this is the only way. Using CodeIgniter, I have the typical two functions for listing records and for showing the total number of records (used for pagination). The problem is that they are rather large. Here are the two functions in my model.
    Count rows:
    function get_all_count() {
        $this->db->select('u.id_user');
        $this->db->from('user u');
        if($this->session->userdata('detail') != '1') {
            $this->db->join('management m', 'm.id_user = u.id_user', 'inner');
            $this->db->where('id_detail', $this->session->userdata('detail'));
            if($this->session->userdata('management') === '1') {
                $this->db->or_where('detail', 1);
            } else {
                $this->db->where("id_profile IN ( SELECT e2.id_profile FROM profile e, profile e2, profile_path p, profile_path p2 WHERE e.id_profile = " . $this->session->userdata('profile') . " AND p2.id_profile = e.id_profile AND p.path LIKE(CONCAT(p2.path,'%')) AND e2.id_profile = p.id_profile )", NULL, FALSE);
                $this->db->where('MD5(u.id_user) <>', $this->session->userdata('id_user'));
            }
        }
        $this->db->where('u.id_user <>', 1);
        $this->db->where('flag <>', 3);
        $query = $this->db->get();
        return $query->num_rows();
    }
    Results per page:
    function get_all($limit, $offset, $sort = '') {
        $this->db->select('u.id_user, user, email, flag');
        $this->db->from('user u');
        if($this->session->userdata('detail') != '1') {
            $this->db->join('management m', 'm.id_user = u.id_user', 'inner');
            $this->db->where('id_detail', $this->session->userdata('detail'));
            if($this->session->userdata('management') === '1') {
                $this->db->or_where('detail', 1);
            } else {
                $this->db->where("id_profile IN ( SELECT e2.id_profile FROM profile e, profile e2, profile_path p, profile_path p2 WHERE e.id_profile = " . $this->session->userdata('profile') . " AND p2.id_profile = e.id_profile AND p.path LIKE(CONCAT(p2.path,'%')) AND e2.id_profile = p.id_profile )", NULL, FALSE);
                $this->db->where('MD5(u.id_user) <>', $this->session->userdata('id_user'));
            }
        }
        $this->db->where('u.id_user <>', 1);
        $this->db->where('flag <>', 3);
        if($sort) $this->db->order_by($sort);
        $this->db->limit($limit, $offset);
        $query = $this->db->get();
        return $query->result();
    }
    As you can see, I repeat most of the function; the only differences are the selected fields and the paging. I wonder if there is any alternative that returns both the query results and the total count from a single function. I have seen many tutorials, and all of them create two functions: one to count and another to return the results... Is there a more optimal way?
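    One common answer is less about merging the two queries than about removing the duplication: keep one private method that applies the shared filters, and two thin public functions on top of it. A sketch (the helper name is invented), assuming the same CodeIgniter model as above:

        private function _apply_filters()
        {
            $this->db->from('user u');
            if ($this->session->userdata('detail') != '1') {
                $this->db->join('management m', 'm.id_user = u.id_user', 'inner');
                $this->db->where('id_detail', $this->session->userdata('detail'));
                if ($this->session->userdata('management') === '1') {
                    $this->db->or_where('detail', 1);
                } else {
                    $this->db->where("id_profile IN ( SELECT e2.id_profile FROM profile e, profile e2, profile_path p, profile_path p2 WHERE e.id_profile = " . $this->session->userdata('profile') . " AND p2.id_profile = e.id_profile AND p.path LIKE(CONCAT(p2.path,'%')) AND e2.id_profile = p.id_profile )", NULL, FALSE);
                    $this->db->where('MD5(u.id_user) <>', $this->session->userdata('id_user'));
                }
            }
            $this->db->where('u.id_user <>', 1);
            $this->db->where('flag <>', 3);
        }

        function get_all_count()
        {
            $this->db->select('u.id_user');
            $this->_apply_filters();
            return $this->db->get()->num_rows();
        }

        function get_all($limit, $offset, $sort = '')
        {
            $this->db->select('u.id_user, user, email, flag');
            $this->_apply_filters();
            if ($sort) $this->db->order_by($sort);
            $this->db->limit($limit, $offset);
            return $this->db->get()->result();
        }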

    Read the article

  • West Wind WebSurge - an easy way to Load Test Web Applications

    - by Rick Strahl
    A few months ago on a project the subject of load testing came up. We were having some serious issues with a Web application that would start spewing SQL lock errors under somewhat heavy load. These sort of errors can be tough to catch, precisely because they only occur under load and not during typical development testing. To replicate this error more reliably we needed to put a load on the application and run it for a while before these SQL errors would flare up. It’s been a while since I’d looked at load testing tools, so I spent a bit of time looking at different tools and frankly didn’t really find anything that was a good fit. A lot of tools were either a pain to use, didn’t have the basic features I needed, or are extravagantly expensive. In  the end I got frustrated enough to build an initially small custom load test solution that then morphed into a more generic library, then gained a console front end and eventually turned into a full blown Web load testing tool that is now called West Wind WebSurge. I got seriously frustrated looking for tools every time I needed some quick and dirty load testing for an application. If my aim is to just put an application under heavy enough load to find a scalability problem in code, or to simply try and push an application to its limits on the hardware it’s running I shouldn’t have to have to struggle to set up tests. It should be easy enough to get going in a few minutes, so that the testing can be set up quickly so that it can be done on a regular basis without a lot of hassle. And that was the goal when I started to build out my initial custom load tester into a more widely usable tool. If you’re in a hurry and you want to check it out, you can find more information and download links here: West Wind WebSurge Product Page Walk through Video Download link (zip) Install from Chocolatey Source on GitHub For a more detailed discussion of the why’s and how’s and some background continue reading. How did I get here? When I started out on this path, I wasn’t planning on building a tool like this myself – but I got frustrated enough looking at what’s out there to think that I can do better than what’s available for the most common simple load testing scenarios. When we ran into the SQL lock problems I mentioned, I started looking around what’s available for Web load testing solutions that would work for our whole team which consisted of a few developers and a couple of IT guys both of which needed to be able to run the tests. It had been a while since I looked at tools and I figured that by now there should be some good solutions out there, but as it turns out I didn’t really find anything that fit our relatively simple needs without costing an arm and a leg… I spent the better part of a day installing and trying various load testing tools and to be frank most of them were either terrible at what they do, incredibly unfriendly to use, used some terminology I couldn’t even parse, or were extremely expensive (and I mean in the ‘sell your liver’ range of expensive). Pick your poison. There are also a number of online solutions for load testing and they actually looked more promising, but those wouldn’t work well for our scenario as the application is running inside of a private VPN with no outside access into the VPN. Most of those online solutions also ended up being very pricey as well – presumably because of the bandwidth required to test over the open Web can be enormous. 
When I asked around on Twitter what people were using– I got mostly… crickets. Several people mentioned Visual Studio Load Test, and most other suggestions pointed to online solutions. I did get a bunch of responses though with people asking to let them know what I found – apparently I’m not alone when it comes to finding load testing tools that are effective and easy to use. As to Visual Studio, the higher end skus of Visual Studio and the test edition include a Web load testing tool, which is quite powerful, but there are a number of issues with that: First it’s tied to Visual Studio so it’s not very portable – you need a VS install. I also find the test setup and terminology used by the VS test runner extremely confusing. Heck, it’s complicated enough that there’s even a Pluralsight course on using the Visual Studio Web test from Steve Smith. And of course you need to have one of the high end Visual Studio Skus, and those are mucho Dinero ($$$) – just for the load testing that’s rarely an option. Some of the tools are ultra extensive and let you run analysis tools on the target serves which is useful, but in most cases – just plain overkill and only distracts from what I tend to be ultimately interested in: Reproducing problems that occur at high load, and finding the upper limits and ‘what if’ scenarios as load is ramped up increasingly against a site. Yes it’s useful to have Web app instrumentation, but often that’s not what you’re interested in. I still fondly remember early days of Web testing when Microsoft had the WAST (Web Application Stress Tool) tool, which was rather simple – and also somewhat limited – but easily allowed you to create stress tests very quickly. It had some serious limitations (mainly that it didn’t work with SSL),  but the idea behind it was excellent: Create tests quickly and easily and provide a decent engine to run it locally with minimal setup. You could get set up and run tests within a few minutes. Unfortunately, that tool died a quiet death as so many of Microsoft’s tools that probably were built by an intern and then abandoned, even though there was a lot of potential and it was actually fairly widely used. Eventually the tools was no longer downloadable and now it simply doesn’t work anymore on higher end hardware. West Wind Web Surge – Making Load Testing Quick and Easy So I ended up creating West Wind WebSurge out of rebellious frustration… The goal of WebSurge is to make it drop dead simple to create load tests. It’s super easy to capture sessions either using the built in capture tool (big props to Eric Lawrence, Telerik and FiddlerCore which made that piece a snap), using the full version of Fiddler and exporting sessions, or by manually or programmatically creating text files based on plain HTTP headers to create requests. I’ve been using this tool for 4 months now on a regular basis on various projects as a reality check for performance and scalability and it’s worked extremely well for finding small performance issues. I also use it regularly as a simple URL tester, as it allows me to quickly enter a URL plus headers and content and test that URL and its results along with the ability to easily save one or more of those URLs. A few weeks back I made a walk through video that goes over most of the features of WebSurge in some detail: Note that the UI has slightly changed since then, so there are some UI improvements. 
Most notably the test results screen has been updated recently to a different layout and to provide more information about each URL in a session at a glance. The video and the main WebSurge site has a lot of info of basic operations. For the rest of this post I’ll talk about a few deeper aspects that may be of interest while also giving a glance at how WebSurge works. Session Capturing As you would expect, WebSurge works with Sessions of Urls that are played back under load. Here’s what the main Session View looks like: You can create session entries manually by individually adding URLs to test (on the Request tab on the right) and saving them, or you can capture output from Web Browsers, Windows Desktop applications that call services, your own applications using the built in Capture tool. With this tool you can capture anything HTTP -SSL requests and content from Web pages, AJAX calls, SOAP or REST services – again anything that uses Windows or .NET HTTP APIs. Behind the scenes the capture tool uses FiddlerCore so basically anything you can capture with Fiddler you can also capture with Web Surge Session capture tool. Alternately you can actually use Fiddler as well, and then export the captured Fiddler trace to a file, which can then be imported into WebSurge. This is a nice way to let somebody capture session without having to actually install WebSurge or for your customers to provide an exact playback scenario for a given set of URLs that cause a problem perhaps. Note that not all applications work with Fiddler’s proxy unless you configure a proxy. For example, .NET Web applications that make HTTP calls usually don’t show up in Fiddler by default. For those .NET applications you can explicitly override proxy settings to capture those requests to service calls. The capture tool also has handy optional filters that allow you to filter by domain, to help block out noise that you typically don’t want to include in your requests. For example, if your pages include links to CDNs, or Google Analytics or social links you typically don’t want to include those in your load test, so by capturing just from a specific domain you are guaranteed content from only that one domain. Additionally you can provide url filters in the configuration file – filters allow to provide filter strings that if contained in a url will cause requests to be ignored. Again this is useful if you don’t filter by domain but you want to filter out things like static image, css and script files etc. Often you’re not interested in the load characteristics of these static and usually cached resources as they just add noise to tests and often skew the overall url performance results. In my testing I tend to care only about my dynamic requests. SSL Captures require Fiddler Note, that in order to capture SSL requests you’ll have to install the Fiddler’s SSL certificate. The easiest way to do this is to install Fiddler and use its SSL configuration options to get the certificate into the local certificate store. There’s a document on the Telerik site that provides the exact steps to get SSL captures to work with Fiddler and therefore with WebSurge. Session Storage A group of URLs entered or captured make up a Session. Sessions can be saved and restored easily as they use a very simple text format that simply stored on disk. The format is slightly customized HTTP header traces separated by a separator line. 
The headers are standard HTTP headers except that the full URL instead of just the domain relative path is stored as part of the 1st HTTP header line for easier parsing. Because it’s just text and uses the same format that Fiddler uses for exports, it’s super easy to create Sessions by hand manually or under program control writing out to a simple text file. You can see what this format looks like in the Capture window figure above – the raw captured format is also what’s stored to disk and what WebSurge parses from. The only ‘custom’ part of these headers is that 1st line contains the full URL instead of the domain relative path and Host: header. The rest of each header are just plain standard HTTP headers with each individual URL isolated by a separator line. The format used here also uses what Fiddler produces for exports, so it’s easy to exchange or view data either in Fiddler or WebSurge. Urls can also be edited interactively so you can modify the headers easily as well: Again – it’s just plain HTTP headers so anything you can do with HTTP can be added here. Use it for single URL Testing Incidentally I’ve also found this form as an excellent way to test and replay individual URLs for simple non-load testing purposes. Because you can capture a single or many URLs and store them on disk, this also provides a nice HTTP playground where you can record URLs with their headers, and fire them one at a time or as a session and see results immediately. It’s actually an easy way for REST presentations and I find the simple UI flow actually easier than using Fiddler natively. Finally you can save one or more URLs as a session for later retrieval. I’m using this more and more for simple URL checks. Overriding Cookies and Domains Speaking of HTTP headers – you can also overwrite cookies used as part of the options. One thing that happens with modern Web applications is that you have session cookies in use for authorization. These cookies tend to expire at some point which would invalidate a test. Using the Options dialog you can actually override the cookie: which replaces the cookie for all requests with the cookie value specified here. You can capture a valid cookie from a manual HTTP request in your browser and then paste into the cookie field, to replace the existing Cookie with the new one that is now valid. Likewise you can easily replace the domain so if you captured urls on west-wind.com and now you want to test on localhost you can do that easily easily as well. You could even do something like capture on store.west-wind.com and then test on localhost/store which would also work. Running Load Tests Once you’ve created a Session you can specify the length of the test in seconds, and specify the number of simultaneous threads to run each session on. Sessions run through each of the URLs in the session sequentially by default. One option in the options list above is that you can also randomize the URLs so each thread runs requests in a different order. This avoids bunching up URLs initially when tests start as all threads run the same requests simultaneously which can sometimes skew the results of the first few minutes of a test. While sessions run some progress information is displayed: By default there’s a live view of requests displayed in a Console-like window. On the bottom of the window there’s a running total summary that displays where you’re at in the test, how many requests have been processed and what the requests per second count is currently for all requests. 
Note that for tests that run over a thousand requests a second it’s a good idea to turn off the console display. While the console display is nice to see that something is happening and also gives you slight idea what’s happening with actual requests, once a lot of requests are processed, this UI updating actually adds a lot of CPU overhead to the application which may cause the actual load generated to be reduced. If you are running a 1000 requests a second there’s not much to see anyway as requests roll by way too fast to see individual lines anyway. If you look on the options panel, there is a NoProgressEvents option that disables the console display. Note that the summary display is still updated approximately once a second so you can always tell that the test is still running. Test Results When the test is done you get a simple Results display: On the right you get an overall summary as well as breakdown by each URL in the session. Both success and failures are highlighted so it’s easy to see what’s breaking in your load test. The report can be printed or you can also open the HTML document in your default Web Browser for printing to PDF or saving the HTML document to disk. The list on the right shows you a partial list of the URLs that were fired so you can look in detail at the request and response data. The list can be filtered by success and failure requests. Each list is partial only (at the moment) and limited to a max of 1000 items in order to render reasonably quickly. Each item in the list can be clicked to see the full request and response data: This particularly useful for errors so you can quickly see and copy what request data was used and in the case of a GET request you can also just click the link to quickly jump to the page. For non-GET requests you can find the URL in the Session list, and use the context menu to Test the URL as configured including any HTTP content data to send. You get to see the full HTTP request and response as well as a link in the Request header to go visit the actual page. Not so useful for a POST as above, but definitely useful for GET requests. Finally you can also get a few charts. The most useful one is probably the Request per Second chart which can be accessed from the Charts menu or shortcut. Here’s what it looks like:   Results can also be exported to JSON, XML and HTML. Keep in mind that these files can get very large rather quickly though, so exports can end up taking a while to complete. Command Line Interface WebSurge runs with a small core load engine and this engine is plugged into the front end application I’ve shown so far. There’s also a command line interface available to run WebSurge from the Windows command prompt. Using the command line you can run tests for either an individual URL (similar to AB.exe for example) or a full Session file. By default when it runs WebSurgeCli shows progress every second showing total request count, failures and the requests per second for the entire test. A silent option can turn off this progress display and display only the results. The command line interface can be useful for build integration which allows checking for failures perhaps or hitting a specific requests per second count etc. It’s also nice to use this as quick and dirty URL test facility similar to the way you’d use Apache Bench (ab.exe). Unlike ab.exe though, WebSurgeCli supports SSL and makes it much easier to create multi-URL tests using either manual editing or the WebSurge UI. 
    Current Status
    Currently West Wind WebSurge is still in Beta status. I'm still adding small new features and tweaking the UI in an attempt to make it as easy and self-explanatory as possible to run. Documentation for the UI and specialty features is also still a work in progress.
    I plan on open-sourcing this product, but it won't be free. There's a free version available that provides a limited number of threads and request URLs to run. A relatively low cost license removes the thread and request limitations. Pricing info can be found on the Web site – there's an introductory price which is $99 at the moment, which I think is reasonable compared to most other for-pay solutions out there that are exorbitant by comparison…
    The reason the code is not available yet is – well, the UI portion of the app is a bit embarrassing in its current monolithic state. The UI started as a very simple interface originally that later got a lot more complex – yeah, that never happens, right? Unless there's a lot of interest I don't foresee re-writing the UI entirely (which would be ideal), but in the meantime at least some cleanup is required before I dare to publish it :-). The code will likely be released with version 1.0.
    I'm very interested in feedback. Do you think this could be useful to you and provide value over other tools you may or may not have used before? I hope so – it has already provided a ton of value for me and the work I do, which made the development worthwhile at this point. You can leave a comment below, or for more extensive discussions you can post a message on the West Wind Message Board in the WebSurge section.
    Microsoft MVPs and Insiders get a free License
    If you're a Microsoft MVP or a Microsoft Insider you can get a full license for free. Send me a link to your current, official Microsoft profile and I'll send you a not-for-resale license. Send any messages to [email protected].
    Resources
    For more info on WebSurge and to download it to try it out, use the following links:
    West Wind WebSurge Home
    Download West Wind WebSurge
    Getting Started with West Wind WebSurge Video
    © Rick Strahl, West Wind Technologies, 2005-2014
    Posted in ASP.NET

    Read the article

  • Problems with SQL Server 2008 - "The client was unable to reuse a session with SPID 62, which had been reset for connection pooling"

    - by GrZeCh
    Hello, I'm having problems with my SQL Server 2008 installation (10.0.2531.0, SP1 installed). It works as the database server for a small hosting environment (about 500 sites). I'm getting errors like this in the Windows event log:
    The client was unable to reuse a session with SPID 62, which had been reset for connection pooling. The failure ID is 29. This error may have been caused by an earlier operation failing. Check the error logs for failed operations immediately before this error message.
    And when I run this:
    SELECT * FROM sys.dm_os_performance_counters WHERE object_name = 'SQLServer:General Statistics'
    I see that one of the counters looks a little odd:
    Logins/sec              429
    Connection Reset/sec    163459
    Logouts/sec             399
    User Connections        30
    Logical Connections     33
    Any ideas how to check what is causing this problem?

    Read the article

  • How do I merge a transient entity with a session using Castle ActiveRecordMediator?

    - by Daniel T.
    I have a Store and a Product entity:
    public class Store {
        public Guid Id { get; set; }
        public int Version { get; set; }
        public ISet<Product> Products { get; set; }
    }

    public class Product {
        public Guid Id { get; set; }
        public int Version { get; set; }
        public Store ParentStore { get; set; }
        public string Name { get; set; }
    }
    In other words, I have a Store that can contain multiple Products in a bidirectional one-to-many relationship. I'm sending data back and forth between a web browser and a web service. The following steps emulate the communication between the two, using session-per-request. I first save a new instance of a Store:
    using (new SessionScope()) {
        // this is data from the browser
        var store = new Store { Id = Guid.Empty };
        ActiveRecordMediator.SaveAndFlush(store);
    }
    Then I grab the store out of the DB, add a new product to it, and save it:
    using (new SessionScope()) {
        // this is data from the browser
        var product = new Product { Id = Guid.Empty, Name = "Apples" };
        var store = ActiveRecordLinq.AsQueryable<Store>().First();
        store.Products.Add(product);
        ActiveRecordMediator.SaveAndFlush(store);
    }
    Up to this point, everything works well. Now I want to update the Product I just saved:
    using (new SessionScope()) {
        // data from browser
        var product = new Product { Id = Guid.Empty, Version = 1, Name = "Grapes" };
        var store = ActiveRecordLinq.AsQueryable<Store>().First();
        store.Products.Add(product);
        // throws exception on this call
        ActiveRecordMediator.SaveAndFlush(store);
    }
    When I try to update the product, I get the following exception:
    a different object with the same identifier value was already associated with the session: 00000000-0000-0000-0000-000000000000, of entity: Product
    As I understand it, the problem is that when I get the Store out of the database, it also gets the Product that's associated with it. Both entities are persistent. Then I try to save a transient Product (the one that has the updated info from the browser) that has the same ID as the one already associated with the session, and an exception is thrown. However, I don't know how to get around this problem. If I could get access to an NHibernate.ISession, I could call ISession.Merge() on it, but it doesn't look like ActiveRecordMediator has anything similar (SaveCopy threw the same exception). Does anyone know the solution? Any help would be greatly appreciated!
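    One workaround that is often suggested (a sketch only, reusing the types and calls from the question, not an official ActiveRecord recipe): instead of attaching a second Product with the same Id, load the persistent one inside the current SessionScope and copy the incoming values onto it, letting NHibernate's dirty checking issue the UPDATE.

        // requires: using System.Linq;  (for .First on the ISet)
        using (new SessionScope()) {
            // data from browser (transient)
            var incoming = new Product { Id = Guid.Empty, Version = 1, Name = "Grapes" };

            var store = ActiveRecordLinq.AsQueryable<Store>().First();
            var existing = store.Products.First(p => p.Id == incoming.Id);

            // Copy the editable fields onto the persistent instance; compare
            // incoming.Version to existing.Version for optimistic concurrency.
            existing.Name = incoming.Name;

            ActiveRecordMediator.SaveAndFlush(store);
        }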

    Read the article

  • NoSuchProviderException: smtp with log4j SMTP appender

    - by user1016403
    I am using log4j to send an email when there is an exception. Below is my log4j properties file configuration:
    log4j.rootLogger=WARN, R, email
    log4j.appender.R=org.apache.log4j.ConsoleAppender
    log4j.appender.R.layout=org.apache.log4j.PatternLayout
    log4j.appender.R.layout.ConversionPattern=%d{HH:mm:ss} %-5p [%c{1}]: %m%n
    log4j.appender.email=org.apache.log4j.net.SMTPAppender
    log4j.appender.email.BufferSize=10
    log4j.appender.email.SMTPHost=myhost.com
    [email protected]
    [email protected]
    log4j.appender.email.Subject=Error
    log4j.appender.email.layout=org.apache.log4j.PatternLayout
    Mine is a Maven project and I have added dependencies for mail.jar, activation.jar and smtp.jar. But on application server startup itself I get the error below:
    [ERROR] log4j:ERROR Error occured while sending e-mail notification.
    [ERROR] javax.mail.NoSuchProviderException: smtp
    [ERROR] at javax.mail.Session.getService(Session.java:782)
    [ERROR] at javax.mail.Session.getTransport(Session.java:708)
    [ERROR] at javax.mail.Session.getTransport(Session.java:651)
    [ERROR] at javax.mail.Session.getTransport(Session.java:631)
    [ERROR] at javax.mail.Session.getTransport(Session.java:686)
    [ERROR] at javax.mail.Transport.send0(Transport.java:166)
    Am I missing anything here? What is the root cause of the error? Is it because of an incorrect SMTP host name, or because of missing/conflicting dependencies?
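    NoSuchProviderException: smtp usually points at the mail jars rather than the host name (a wrong host tends to fail later, at connect time). One thing worth trying, as an assumption to verify against your own dependency tree: depend on the full JavaMail implementation, which already bundles the SMTP provider, instead of a bare mail API jar plus a separate smtp.jar.

        <dependency>
            <groupId>javax.mail</groupId>
            <artifactId>mail</artifactId>
            <version>1.4.7</version>
        </dependency>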

    Read the article

  • How do I properly unit test a Django session?

    - by thebossman
    The behavior of Django sessions changes between "standard" view code and test code, making it unclear how test code should be written for sessions. Googling this yields two relevant discussions about the issue:
    Easier manipulation of sessions by test client
    test.Client.session.save() raises error for anonymous users
    I'm confused because the two tickets have different ways of dealing with this problem, and they were both Accepted. I assume this means they were patched and the behavior is now different. I also don't know which versions these patches pertain to. If I'm writing a unit test in Django 1.0, how would I set up my session store so that sessions work as they do in the browser?
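    For reference, a minimal sketch of the pattern those tickets converge on (it assumes a Django version in which the test client exposes a real session store; the URL and session key are invented): grab self.client.session, mutate it, save it, and the values are then visible to subsequent test-client requests.

        from django.test import TestCase

        class SessionBehaviourTest(TestCase):
            def test_session_value_survives_requests(self):
                session = self.client.session      # a SessionStore, not a plain dict
                session['cart'] = [1, 2, 3]        # hypothetical session key
                session.save()                     # persisted under the client's session cookie

                self.client.get('/cart/')          # hypothetical URL

                self.assertEqual(self.client.session['cart'], [1, 2, 3])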

    Read the article

  • Can I send some text to the STDIN of an active process running in a screen session?

    - by Richard Gaywood
    I have a long-running server process inside a screen session on my Linux server. It's a bit unstable (and sadly not my software so I can't fix that!), so I want to script a nightly restart of the process to help stability. The only way to make it do a graceful shutdown is to go to the screen process, switch to the window it's running in, and enter the string "stop" on its control console. Are there any smart redirection contortions I can do to make a cronjob send that stop command at a fixed time every day?
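    For what it's worth, screen itself can type into a window from outside via its -X stuff command, so no redirection contortions are needed. A minimal sketch (the session name "gameserver" and window number 0 are assumptions; adjust to your setup):

        # Send the word "stop" plus Enter to window 0 of the named screen session.
        screen -S gameserver -p 0 -X stuff $'stop\n'

    Since cron runs jobs under /bin/sh, where the $'...' quoting is not available, it is easiest to put that line in a small bash script and call the script from the crontab entry at the desired time.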

    Read the article

  • NHibernate flush should save only dirty objects

    - by Emilian
    Why does NHibernate fire an update on firstOrder when saving secondOrder in the code below? I'm using optimistic locking on Order. Is there a way to tell NHibernate to update firstOrder when saving secondOrder only if firstOrder was actually modified?
    // Configure
    var cfg = new Configuration();
    var configFile = Path.Combine(
        AppDomain.CurrentDomain.BaseDirectory,
        "NHibernate.MySQL.config");
    cfg.Configure(configFile);

    // Create session factory
    var sessionFactory = cfg.BuildSessionFactory();

    // Create session
    var session = sessionFactory.OpenSession();

    // Set session to flush on transaction commit
    session.FlushMode = FlushMode.Commit;

    // Create first order
    var firstOrder = new Order();
    var firstOrder_OrderLine = new OrderLine {
        ProductName = "Bicycle",
        ProductPrice = 120.00M,
        Quantity = 1
    };
    firstOrder.Add(firstOrder_OrderLine);

    // Save first order
    using (var tx = session.BeginTransaction()) {
        try {
            session.Save(firstOrder);
            tx.Commit();
        } catch {
            tx.Rollback();
        }
    }

    // Create second order
    var secondOrder = new Order();
    var secondOrder_OrderLine = new OrderLine {
        ProductName = "Hat",
        ProductPrice = 12.00M,
        Quantity = 1
    };
    secondOrder.Add(secondOrder_OrderLine);

    // Save second order
    using (var tx = session.BeginTransaction()) {
        try {
            session.Save(secondOrder);
            tx.Commit();
        } catch {
            tx.Rollback();
        }
    }

    session.Close();
    sessionFactory.Close();

    Read the article

  • How do I get secure AuthSub session tokens in PHP?

    - by robertdd
    I am using the Google/YouTube APIs to develop a web application which needs access to a user's YouTube account. Normal unsecure requests work fine and I can upgrade one-time tokens to session tokens without any hassle. The problem comes when I try to upgrade a secure token to a session token; I get:
    ERROR - Token upgrade for CIzF3546351vmq_P____834654G failed : Token upgrade failed. Reason: Invalid AuthSub header. Error 401
    I use this:
    function updateAuthSubToken($singleUseToken) {
        try {
            $client = new Zend_Gdata_HttpClient();
            $client->setAuthSubPrivateKeyFile('/home/www/key.pem', null, true);
            $sessionToken = Zend_Gdata_AuthSub::AuthSubRevokeToken($sessionToken, $client);
            $client->setAuthSubToken($sessionToken);
        } catch (Zend_Gdata_App_Exception $e) {
            print 'ERROR - Token upgrade for ' . $singleUseToken . ' failed : ' . $e->getMessage();
            return;
        }

        $_SESSION['sessionToken'] = $sessionToken;
        generateUrlInformation();
        header('Location: ' . $_SESSION['homeUrl']);
    }
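    As a point of comparison, here is a sketch of how the secure upgrade normally looks in Zend_Gdata (note that the snippet above calls AuthSubRevokeToken on an undefined $sessionToken; the upgrade call is getAuthSubSessionToken, and the private key must be attached to the client that performs it). Treat the details as something to verify against the Zend docs:

        function updateAuthSubToken($singleUseToken) {
            $client = new Zend_Gdata_HttpClient();
            $client->setAuthSubPrivateKeyFile('/home/www/key.pem', null, true);

            // Exchange the single-use token for a session token over the signed client.
            $sessionToken = Zend_Gdata_AuthSub::getAuthSubSessionToken($singleUseToken, $client);
            $client->setAuthSubToken($sessionToken);

            $_SESSION['sessionToken'] = $sessionToken;
        }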

    Read the article

  • How to reconnect to Remote Session in a script?

    - by Lukasz
    Hello, I need to persist a Remote Desktop connection across a reboot of a Terminal Server. I'm thinking it would be something like a scheduled task that runs periodically, checks the running state of the session, and restarts it if it's down. BTW, I did check the "Reconnect..." checkbox on the advanced tab of the connection options, but the session still goes down every time we restart the terminal server. Does anyone have a script that would accomplish the above in a scheduled task, or perhaps another solution?

    Read the article

  • Storing simulation results in a persistent manner for Python?

    - by Az
    Background: I'm running multiple simulations on a set of data. For each session, I'm allocating projects to students. The difference between each session is that I'm randomising the order of the students, such that all the students get a shot at being assigned a project they want. I was writing out some of the allocations in a spreadsheet (i.e. Excel) and it basically looked like this (tiny snapshot; the actual table extends to a few thousand sessions and roughly 100 students):
    |          | Session 1 | Session 2 | Session 3 |
    |----------|-----------|-----------|-----------|
    | Stu1     | Proj_AA   | Proj_AB   | Proj_AB   |
    | Stu2     | Proj_AB   | Proj_AA   | Proj_AC   |
    | Stu3     | Proj_AC   | Proj_AC   | Proj_AA   |
    Now, the code that deals with the allocation currently stores a session in an object. The next time the allocation is run, the object is overwritten. What I'd really like to do is store all the allocation results. This is important since I later need to derive information from the data, such as which project Stu1 got assigned to the most, or perhaps how popular Proj_AC was (how many times it was assigned / number of sessions).
    Question(s): What methods can I use to store such session information persistently? Basically, each session's output needs to add itself to the repository after ending and before the next allocation cycle begins. One solution that was suggested by a friend was mapping these results to a relational database using SQLAlchemy. I kind of like the idea since this gives me an opportunity to delve into databases. The database structure I was recommended was:
    | Session  | Student   | Project   |
    |----------|-----------|-----------|
    | 1        | Stu1      | Proj_AA   |
    | 1        | Stu2      | Proj_AB   |
    | 1        | Stu3      | Proj_AC   |
    | 2        | Stu1      | Proj_AB   |
    | 2        | Stu2      | Proj_AA   |
    | 2        | Stu3      | Proj_AC   |
    | 3        | Stu1      | Proj_AB   |
    | 3        | Stu2      | Proj_AC   |
    | 3        | Stu3      | Proj_AA   |
    Here, it was suggested that I make the Session and Student columns a composite key. That way I can access a specific record for a particular student in a particular session, or I can merely get the entire allocation run for a particular session.
    Questions: Is the idea a good one? How does one implement and query a composite key using SQLAlchemy? What happens to the database if a particular student is not assigned a project (which happens if all the projects he wants are taken)? In the code, if a student is not assigned a project, instead of a proj_id he simply gets None for that field/object. I apologise for asking multiple questions, but since they are closely related I thought I'd ask them in the same space.
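    On the composite-key question, a minimal SQLAlchemy sketch (table and attribute names are illustrative): a composite primary key is simply several columns marked primary_key=True, and a nullable project column covers the unassigned-student case.

        from sqlalchemy import Column, Integer, String, create_engine
        from sqlalchemy.orm import declarative_base, sessionmaker

        Base = declarative_base()

        class Allocation(Base):
            __tablename__ = 'allocations'
            session_id = Column(Integer, primary_key=True)  # composite key, part 1
            student    = Column(String,  primary_key=True)  # composite key, part 2
            project    = Column(String,  nullable=True)     # None = student left unassigned

        engine = create_engine('sqlite:///allocations.db')
        Base.metadata.create_all(engine)
        Session = sessionmaker(bind=engine)
        db = Session()

        # One row per (session, student); append rows after each allocation run.
        db.add_all([
            Allocation(session_id=1, student='Stu1', project='Proj_AA'),
            Allocation(session_id=1, student='Stu2', project='Proj_AB'),
            Allocation(session_id=2, student='Stu1', project=None),
        ])
        db.commit()

        # How often was Proj_AA assigned, and what did Stu1 get in each session?
        print(db.query(Allocation).filter_by(project='Proj_AA').count())
        print([(a.session_id, a.project)
               for a in db.query(Allocation).filter_by(student='Stu1')])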

    Read the article

  • The session setup from the computer <computerName> failed to authenticate.

    - by TheCodeMonk
    Every once in a while, I get a client PC that won't be able to log into the domain. This morning it was telling us that the trust relationship between the pc and the domain failed. I checked the event logs on the primary domain controller and I see this for 2 PCs (the one that had the problem and one that can log in today). The session setup from the computer failed to authenticate. The name(s) of the account(s) referenced in the security database is . The following error occurred: Access is denied. I know how to fix this, by rejoining the PC to the domain... But why does this happen and how can I prevent it so I don't have to keep rejoining PCs to the domain?
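    Not an answer to the "why" (that event generally indicates the workstation's machine-account password no longer matches what the domain controller has stored, i.e. a broken secure channel), but the usual way to repair it without a full rejoin is to reset the computer account password from the affected client, along the lines of this sketch (server and account names are placeholders; netdom comes from the Support Tools or RSAT, depending on the Windows version):

        REM Run on the affected client, as a domain admin:
        netdom resetpwd /server:DC01 /userd:MYDOMAIN\Administrator /passwordd:*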

    Read the article

  • nHibernate session - Using repository pattern in Web, windows, wcf etc...

    - by alex
    I recently posted a question, which was answered by Bryan Watts, regarding a generic repository for NHibernate. I'm trying to design my data access to allow various facets - ASP.NET, WCF and Windows Forms / Windows services. I'm a bit confused about session management, etc. How would I handle this? I've been checking out code such as http://membranecms.googlecode.com/svn/ and questions such as http://stackoverflow.com/questions/1207833/nhibernate-linq-session-management. But what do I do if I'm not just working in a web-based environment? Do I need to create different repositories for each client? Or do I pass the ISession into the (for example) UserRepository constructor? As a side note, I'm using NHibernate.Linq, and Fluent NHibernate to configure my mappings.
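    A sketch of the constructor-injection option mentioned above (entity and repository names are invented): the repository never opens sessions itself, so each host - ASP.NET, WCF, WinForms or a Windows service - decides the session lifetime (per request, per call, per form) and hands the ISession in.

        public class UserRepository
        {
            private readonly NHibernate.ISession _session;

            public UserRepository(NHibernate.ISession session)
            {
                _session = session;   // lifetime is owned by the calling host, not the repository
            }

            public User Get(System.Guid id)  { return _session.Get<User>(id); }
            public void Save(User user)      { _session.SaveOrUpdate(user); }
        }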

    Read the article

  • JSESSIONID collision between two servers on the same IP but different ports

    - by Steve Armstrong
    I've got a situation where I have two different webapps running on a single server, using different ports. They're both running in Java's Jetty servlet container, so they both use a cookie named JSESSIONID to track the session id. These two webapps are fighting over the session id:
    Open a Firefox tab and go to WebApp1.
    WebApp1's HTTP response has a set-cookie header with JSESSIONID=1.
    Firefox now sends a Cookie header with JSESSIONID=1 in all its HTTP requests to WebApp1.
    Open a second Firefox tab and go to WebApp2.
    The HTTP request to WebApp2 also has a Cookie header with JSESSIONID=1, but in the doGet, when I call req.getSession(false); I get null. And if I call req.getSession(true) I get a new Session object, but then the HTTP response from WebApp2 has a set-cookie header with JSESSIONID=20.
    Now WebApp2 has a working session, but WebApp1's session is gone. Going to WebApp1 gives me a new session, blowing away WebApp2's session. Continue forever.
    So the sessions are thrashing between the two web apps. I'd really like req.getSession(false) to return a valid session if there's already a JSESSIONID cookie defined. One option is to basically reimplement the session framework with a HashMap and cookies called WEBAPP1SESSIONID and WEBAPP2SESSIONID, but that sucks, and it means I'll have to hack the new session handling into ActionServlet and a few other places. This must be a problem others have encountered. Is Jetty's HttpServletRequest.getSession(boolean) just crappy?
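    The collision happens because browser cookies are scoped by host name only, not by port, so the two containers keep overwriting each other's JSESSIONID. A minimal sketch of one way out, assuming a Jetty version that supports the Servlet 3.0 API: give each webapp its own session cookie name so the browser keeps both.

        import javax.servlet.ServletContextEvent;
        import javax.servlet.ServletContextListener;
        import javax.servlet.annotation.WebListener;

        @WebListener
        public class SessionCookieNameListener implements ServletContextListener {
            @Override
            public void contextInitialized(ServletContextEvent sce) {
                // Use a different name in each webapp, e.g. WEBAPP2SESSIONID in the other one.
                sce.getServletContext().getSessionCookieConfig().setName("WEBAPP1SESSIONID");
            }

            @Override
            public void contextDestroyed(ServletContextEvent sce) { }
        }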

    Read the article

  • I can't do a Remote Assistance session to a Windows XP box from Windows 7.

    - by superkinhluan
    My Mom's computer is running Windows XP, and my desktop is running Windows 7. She's having some technical issue, so I want to start a Remote Assistance session to her machine. However, no matter what I've tried, the Remote Assistance program doesn't connect successfully. I've verified that the Windows Firewall (on both my machine and hers) is configured properly to allow the Remote Assistance program through. What's interesting is that I have the same problem when I try to do Remote Assistance from my desktop to my laptop, which is also running Windows XP. However, when I try to connect to my girlfriend's machine, which is running Windows 7, the connection is successful. So in the end, I guess there must be some incompatibility between Windows 7 and Windows XP. Does anyone experience the same issue? How did you resolve it?

    Read the article
