Search Results

Search found 14464 results on 579 pages for 'del icio us'.

  • How to motivate as a project leader?

    - by Zenzen
    Ok, some of you might think this isn't Stack Overflow material, but since I got a similar question during an interview (for a J2EE dev position), I think you'll be able to help me in the end.

    The situation is simple: you're working on a project in a small group (4-5 people), you're the project leader, and the most technically competent guy is also the one slacking the most. What do you do to motivate him?

    The thing is, I'm having the same issue at university. This semester we have a lot of projects going on (6, to be exact), so we decided to divide the work so that everyone does 1-2 projects in a technology he knows or wants to learn. It's been working well for the most part; the problem is the person we thought would finish his project well before the rest of us, since he's the most experienced among us. How am I supposed to make him do his share properly? Until now he has been half-assing his part by doing the bare minimum, which didn't exactly please the professor, to say the least...

    For now it's only a university project, but in the future I might face the same problem in a real job, and losing a really smart and experienced worker (firing him is the only solution I can come up with right now) would really be a waste. Aren't there any better ways?

    P.S. Now that I think about it, are there any books that would help me become a better project manager?

  • Explorer.exe keeps crashing during log in

    - by asif
    I have a weird problem. My Windows 7 machine has two user accounts (both administrators). I can log in to one account and do all sorts of work, but whenever I try to log in to the other account, I get a blank screen and a message box pops up with "Windows Explorer has stopped working". The options offered are:

    - Close the program
    - Check online for a solution and close the program

    The problem signature is as follows:

        Problem Event Name:       InPageError
        Error Status Code:        c000009c
        Faulting Media Type:      00000003
        OS Version:               6.1.7601.2.1.0.256.1
        Locale ID:                1033
        Additional Information 1: 0a9e
        Additional Information 2: 0a9e372d3b4ad19135b953a78882e789
        Additional Information 3: 0a9e
        Additional Information 4: 0a9e372d3b4ad19135b953a78882e789

    If I press Ctrl+Alt+Del and select Start Task Manager, it also crashes, and I cannot run any program with the runas command (from the good profile) either; Task Manager and runas show the same problem signature. I read the similar question and followed all the steps, but no luck.

    Later, I viewed the event log and found that explorer.exe could not access a file. I checked the location, and the file is there. The actual message is:

        Windows cannot access the file C:\Users\testuser\AppData\Local\Microsoft\Windows\Caches\{AFBF9F1A-8EE8-4C77-AF34-C647E37CA0D9}.1.ver0x0000000000000020.db for one of the following reasons: there is a problem with the network connection, the disk that the file is stored on, or the storage drivers installed on this computer; or the disk is missing. Windows closed the program Windows Explorer because of this error.

    The question is: how can I resolve this issue? Should I just delete the file, or replace it with another one, to stop explorer.exe from crashing?

    Off topic: what is the content of this file, and why is it necessary?
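
    A low-risk experiment rather than a confirmed fix: per-user cache databases under \AppData\Local\Microsoft\Windows\Caches are normally rebuilt on the next log-in, so renaming the file from the working account should be safe. Also, an InPageError with status c000009c generally means Windows failed to read a page back from disk, so a surface scan is worth running too. The path below is copied from the error message; adjust as needed.

        REM Run from the account that still works, in an elevated prompt.
        ren "C:\Users\testuser\AppData\Local\Microsoft\Windows\Caches\{AFBF9F1A-8EE8-4C77-AF34-C647E37CA0D9}.1.ver0x0000000000000020.db" explorer-cache.bak
        REM InPageError points at a failed disk read, so check the volume as well.
        chkdsk C: /r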

  • TimeZone change to UTC while updating the Appointment

    - by Firoz Ansari
    I am using EWS 1.2 to send appointments. When creating a new appointment, the time zone shows properly in the notification mail, but when updating the same appointment, its time zone resets to UTC. Could anyone help me fix this issue? Here is sample code to replicate it:

        ExchangeService service = new ExchangeService(ExchangeVersion.Exchange2010_SP1,
            TimeZoneInfo.FindSystemTimeZoneById("Eastern Standard Time"));
        service.Credentials = new WebCredentials("ews_calendar", PASSWORD, "acme");
        service.Url = new Uri("https://acme.com/EWS/Exchange.asmx");

        Appointment newAppointment = new Appointment(service);
        newAppointment.Subject = "Test Subject";
        newAppointment.Body = "Test Body";
        newAppointment.Start = new DateTime(2012, 03, 27, 17, 00, 0);
        newAppointment.End = newAppointment.Start.AddMinutes(30);
        newAppointment.RequiredAttendees.Add("[email protected]");

        // Attendees get a notification mail for this appointment using the
        // (UTC-05:00) Eastern Time (US & Canada) time zone. The notification reads:
        // When: Tuesday, March 27, 2012 5:00 PM-5:30 PM. (UTC-05:00) Eastern Time (US & Canada)
        newAppointment.Save(SendInvitationsMode.SendToAllAndSaveCopy);

        // Pull the existing appointment
        string itemId = newAppointment.Id.ToString();
        Appointment existingAppointment = Appointment.Bind(service, new ItemId(itemId));

        // Attendees get a notification mail for this appointment using the UTC time zone:
        // When: Tuesday, March 27, 2012 11:00 PM-11:30 PM. UTC
        existingAppointment.Update(ConflictResolutionMode.AlwaysOverwrite,
            SendInvitationsOrCancellationsMode.SendToAllAndSaveCopy);
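
    A workaround worth trying, sketched under the assumption that the EWS Managed API 1.2 Appointment class exposes the StartTimeZone/EndTimeZone properties (verify against your assembly): re-stamp the time zone explicitly on the bound appointment before calling Update, so the update request carries the Eastern Time definition instead of falling back to UTC.

        // Hypothetical fix: re-apply the time zone before updating.
        Appointment existingAppointment = Appointment.Bind(service, new ItemId(itemId));
        TimeZoneInfo eastern = TimeZoneInfo.FindSystemTimeZoneById("Eastern Standard Time");
        existingAppointment.StartTimeZone = eastern; // without this, the update may be interpreted as UTC
        existingAppointment.EndTimeZone = eastern;   // EndTimeZone applies to Exchange 2010+ requests
        existingAppointment.Update(ConflictResolutionMode.AlwaysOverwrite,
            SendInvitationsOrCancellationsMode.SendToAllAndSaveCopy);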

  • SQL Server performance optimization by removing PRINT statements

    - by AG
    We're going through a round of SQL Server stored procedure optimizations. The one recommendation we've found that clearly applies to us is SET NOCOUNT ON at the top of each procedure. (Yes, I've seen the posts that point out issues with this depending on what client objects you run the stored procedures from, but those are not issues for us.)

    So now I'm just trying to apply a bit of common sense. If the benefit of SET NOCOUNT ON is simply to reduce network traffic by some small amount on every call, wouldn't it also make sense to turn off all the PRINT statements we have in the stored procedures that we only use for debugging? I can't see how it could hurt performance. On the other hand, it's a bit of a hassle to implement, because some of the PRINT statements are the only thing inside ELSE clauses, so you can't always just comment out the one line and be done.

    The change carries some amount of risk, so I don't want to do it if it isn't actually going to help. But I don't see eliminating PRINT statements mentioned anywhere in articles on optimization. Is that because it is so obvious that no one bothers to mention it?
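
    For illustration, a minimal T-SQL sketch of one common compromise (the procedure and table names are invented for the example): keep the PRINT calls but gate them behind a @Debug parameter. This also sidesteps the empty-ELSE problem, because the statement stays in place and simply does nothing on production calls.

        -- Hypothetical procedure shape; @Debug defaults off in production.
        CREATE PROCEDURE dbo.usp_Example
            @SomeValue int,
            @Debug bit = 0
        AS
        BEGIN
            SET NOCOUNT ON;  -- suppress the per-statement row-count messages

            IF @SomeValue > 0
                UPDATE dbo.SomeTable SET Touched = 1 WHERE Id = @SomeValue;
            ELSE
            BEGIN
                -- The ELSE branch keeps a real statement, so nothing needs commenting out.
                IF @Debug = 1 PRINT 'No rows qualified: @SomeValue was not positive';
            END
        END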

  • retrieving object information with Doctrine

    - by ajsie
    I want to fetch information from the database using objects. I really like this approach because it is more OOP:

        $user = Doctrine_Core::getTable('User')->find(1);
        echo $user->Email['address'];
        echo $user->Phonenumbers[0]->phonenumber;

    rather than:

        $q = Doctrine_Query::create()
            ->from('User u')
            ->leftJoin('u.Email e')
            ->leftJoin('u.Phonenumbers p')
            ->where('u.id = ?', 1);
        $user = $q->fetchOne();
        echo $user->Email['address'];
        echo $user->Phonenumbers[0]['phonenumber'];

    The problem is that the first one issues 3 queries (3 different tables), while the second one issues only 1 (and is therefore the recommended technique). But I feel that it destroys the object-oriented design. ORM is meant to give us an OOP approach so that we can focus on objects and not on the relational database, and now they want us to go back to an SQL-like pattern. Isn't there a way to get information from multiple tables without using DQL?

    The examples above are taken from the Doctrine documentation.

  • Posting forms to a 404 + HttpHandler in IIS7: why has all POST data gone missing?

    - by Rahul
    OK, this might sound a bit confusing and complicated, so bear with me.

    We've written a framework that allows us to define friendly URLs. If you surf to any arbitrary URL, IIS tries to display a 404 error (or, in some cases, 403;14 or 405). However, IIS is set up so that anything directed to those specific errors is sent to an .aspx file. This allows us to implement an HttpHandler to handle the request and do stuff, which involves finding the associated template and then executing whatever's associated with it.

    Now, this all works in IIS 5 and 6 and, to an extent, on IIS 7, but for one catch, which happens when you post a form. See, when you post a form to a non-existent URL, IIS says "ah, but that URL doesn't exist" and throws a 405 "method not allowed" error. Since we're telling IIS to redirect those errors to our .aspx page and therefore handling them with our HttpHandler, this normally isn't a problem. But as of IIS 7, all POST data has gone missing by the time the request has been redirected to the 405 handler, so you can no longer do the most trivial of things involving forms.

    To solve this we've tried using an HttpModule, which preserves POST data but appears not to have an initialized Session at the time we need it. We also tried using an HttpModule for all requests, not just the missing ones that hit 404/403;14/405, but that means things like images, CSS and JS are handled by .NET code, which is terribly inefficient.

    Which brings me to the actual question: has anyone ever encountered this, and does anyone have any advice or know what to do to get things working again? So far someone has suggested using Microsoft's own URL Rewriting module. Would this help solve our problem? Thanks.
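
    One avenue worth testing, sketched here under the assumption that the site runs in the IIS 7 integrated pipeline: the <httpErrors> section with responseMode="ExecuteURL" performs a server-side child execution of the original request rather than the old-style detour, which is why it is the usual suggestion for keeping the request's verb and body intact. Whether it fully restores your POST data is something to verify; the handler path is a placeholder.

        <!-- web.config sketch; /FriendlyUrlHandler.aspx stands in for your handler page -->
        <system.webServer>
          <httpErrors errorMode="Custom" existingResponse="Replace">
            <remove statusCode="404" subStatusCode="-1" />
            <error statusCode="404" path="/FriendlyUrlHandler.aspx" responseMode="ExecuteURL" />
            <remove statusCode="405" subStatusCode="-1" />
            <error statusCode="405" path="/FriendlyUrlHandler.aspx" responseMode="ExecuteURL" />
          </httpErrors>
        </system.webServer>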

  • Fabric "TypeError: not all arguments converted during string formatting"

    - by Brian Carpio
    I have the following Fabric task:

        @task
        def deploy_west_ec2_ami(name, puppetClass, size='m1.small', region='us-west-1',
                                basedn='joe', ldap='arch-ldap-01', secret='secret',
                                subnet='subnet-d43b8abd', sgroup='sg-926578fe'):
            execute(deploy_ec2_ami,
                    name='%s', puppetClass='%s', size='%s', region='%s', basedn='%s',
                    ldap='%s', secret='%s', subnet='%s',
                    sgroup='%s' % (name, puppetClass, size, region, basedn, ldap,
                                   secret, subnet, sgroup))

    However, when I run the command:

        fab deploy_west_ec2_ami:test,java

    I get the following traceback:

        Traceback (most recent call last):
          File "/usr/local/lib/python2.6/dist-packages/fabric/main.py", line 710, in main
            *args, **kwargs
          File "/usr/local/lib/python2.6/dist-packages/fabric/tasks.py", line 321, in execute
            results['<local-only>'] = task.run(*args, **new_kwargs)
          File "/usr/local/lib/python2.6/dist-packages/fabric/tasks.py", line 113, in run
            return self.wrapped(*args, **kwargs)
          File "/home/bcarpio/Projects/githubenterprise/awsdeploy/fabfile.py", line 35, in deploy_west_ec2_ami
            execute(deploy_ec2_ami, name='%s', puppetClass='%s', size='%s', region='%s', basedn='%s', ldap='%s', secret='%s', subnet='%s', sgroup='%s' % (name, puppetClass, size, region, basedn, ldap, secret, subnet, sgroup))
        TypeError: not all arguments converted during string formatting

    I am not sure I understand why; I am pretty sure I have all the values defined just fine. Also, when I run the execute task deploy_ec2_ami directly, like so:

        deploy_ec2_ami:test,java,m1.small,us-west-1,'dc\=test\,dc\=net',ldap-01,secret,subnet-d43b8abd,sg-926578fe

    it works just fine.
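
    The error comes down to operator precedence: % binds more tightly than the commas separating the keyword arguments, so only the last '%s' is formatted, and it receives a 9-element tuple, hence "not all arguments converted during string formatting". Every other keyword is passed the literal string '%s'. Since the values are already in local variables, no formatting is needed at all; a corrected sketch:

        @task
        def deploy_west_ec2_ami(name, puppetClass, size='m1.small', region='us-west-1',
                                basedn='joe', ldap='arch-ldap-01', secret='secret',
                                subnet='subnet-d43b8abd', sgroup='sg-926578fe'):
            # Pass each value straight through instead of via '%s' placeholders.
            execute(deploy_ec2_ami, name=name, puppetClass=puppetClass, size=size,
                    region=region, basedn=basedn, ldap=ldap, secret=secret,
                    subnet=subnet, sgroup=sgroup)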

  • Attempting to convert a byte[] into an Image... are there platform issues involved?

    - by user305535
    Greetings,

    Currently, I'm attempting to develop an application that takes a byte array streamed to us from a Linux C program across a TcpClient (stream) and reassembles it back into an image/jpg. The "sending" application was developed by an off-site developer who claims that the image reassembles back into an image without any problems or errors in his test environment (all Linux)... However, we are not so fortunate.

    I believe we successfully get all of the data sent, storing it as a string (which lets us append the stream until it is complete) and then converting it back into a byte[]. This appears to be working fine... But when we take the byte[] and try to convert it into an image using System.Drawing.Image.FromStream(), we get errors.

    Does anyone have any idea what we're doing wrong? Or does anyone know if this is a cross-platform issue? We're developing our app for Windows XP in C#/.NET, but the off-site developer did his work in C on Linux... Perhaps there's some difference in how each operating system converts images into byte arrays?

    Anyway, here's the code for converting our received byte array (from the TcpClient stream) into an image. This code works when we send an image from a test machine we built that runs XP, but not from the Linux box:

        System.Text.ASCIIEncoding encoding = new System.Text.ASCIIEncoding();
        byte[] imageBytes = encoding.GetBytes(data);
        MemoryStream ms = new MemoryStream(imageBytes, 0, imageBytes.Length);
        // Convert byte[] to Image
        ms.Write(imageBytes, 0, imageBytes.Length);
        System.Drawing.Image image = System.Drawing.Image.FromStream(ms, false);
        // ^ DIES here, throwing a System.ArgumentException: Parameter is not valid.

    Any advice, suggestions, theories, or help would be greatly appreciated! Best wishes, and thanks in advance!

    Greg
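
    A likely explanation, offered as a hypothesis: JPEG data is binary, and a round trip through a .NET string with ASCIIEncoding is lossy (ASCII is 7-bit, so every byte above 0x7F gets mangled), which corrupts the image and produces exactly this ArgumentException. A sketch that accumulates the raw bytes and never touches a string; the method name and buffer size are illustrative:

        using System.Drawing;
        using System.IO;
        using System.Net.Sockets;

        static Image ReceiveImage(TcpClient client)
        {
            // GDI+ requires the source stream to stay open for the image's lifetime,
            // so the MemoryStream is intentionally not disposed here.
            MemoryStream ms = new MemoryStream();
            using (NetworkStream ns = client.GetStream())
            {
                byte[] buffer = new byte[8192];
                int read;
                // Copy until the sender closes its end of the connection.
                while ((read = ns.Read(buffer, 0, buffer.Length)) > 0)
                    ms.Write(buffer, 0, read);   // accumulate raw bytes; no string round-trip
            }
            ms.Position = 0;                      // rewind before decoding
            return Image.FromStream(ms);
        }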

  • Licensing iPhone apps per user in existing system

    - by Alxandr
    I've been asked by my job to write an iPhone app for an existing system for managing work tasks. This system is proprietary and costs money, so in order to log in you need to be a customer. Now, I've got two questions about the legality of licensing iPhone apps with this system:

    1. My company would like to be able to sell the app for profit, not as a one-time payment but as a subscription fee added to the already existing one. Is it legal for us (according to the terms of distributing an iPhone app on the Apple App Store) to do this? That way we'd just add another field to the users database saying whether or not iPhone access is enabled for them, and distribute the app for free on the App Store.

    2. If the previous option is not legal, we'd like to just create a free app and distribute it as part of the existing system. In other words, no extra fee for using the iPhone app, but still free distribution through the App Store.

    Because our company is not American and has no office in the U.S. at all, an enterprise account is not an option. Please let me know if there is anything wrong with either of the above approaches.

  • Using memory-based cache together with conventional cache

    - by Industrial
    Hi!

    Here's the deal. We would have taken the completely-static-HTML road to solve our performance issues, but since the site will be partially dynamic, this won't work for us. What we have thought of instead is using memcached + eAccelerator to speed up PHP and take care of caching the most-used data. Here are the two approaches we're considering right now:

    - Use memcached for all major queries and leave it alone to do what it does best.
    - Use memcached for the most commonly retrieved data, combined with a standard hard-drive-stored cache for everything else.

    The major advantage of only using memcached is of course the performance, but as the number of users increases, the memory usage gets heavy. Combining the two sounds like a more natural approach to us, even with the theoretical compromise in performance. Memcached also appears to have some replication features available, which may come in handy when it's time to add nodes.

    Which approach should we use? Is it stupid to compromise and combine the two methods? Should we instead focus on utilizing memcached and simply upgrade the memory as the load increases with the number of users?

    Thanks a lot!
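
    To make the combined option concrete, a minimal PHP sketch of a two-tier lookup (the function name and cache path are invented; it assumes the pecl Memcache extension): check memcached first, fall back to the disk cache, and promote disk hits back into memory.

        <?php
        // Hypothetical two-tier cache: memcached in front, disk behind.
        function cache_get(Memcache $mc, $key, $dir = '/var/cache/app') {
            $value = $mc->get($key);
            if ($value !== false) {
                return $value;                      // hot: served from memory
            }
            $file = $dir . '/' . md5($key) . '.cache';
            if (is_file($file)) {
                $value = unserialize(file_get_contents($file));
                $mc->set($key, $value, 0, 300);     // promote to memory for 5 minutes
                return $value;
            }
            return false;                           // miss in both tiers
        }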

  • how could application installations/configurations be easier in linux? [closed]

    - by ajsie
    Although you can do anything in Linux, it tends to require a lot of tweaking of config files and reading a lot of manuals and tutorials before you can have things running your way. I know that it gets much easier with time, and apt-get installation on Ubuntu/Debian is heading in the right direction. But how could Linux be more user-friendly for us in the future?

    I thought that more could be automated, as in an IDE environment: e.g. typing svn would show all the subcommands and a description of each as you move between them with your keyboard. That would be great, but it's just one example. Another is navigation between folders in the terminal: right now you have to type a lot just to jump between different directories, and some more automation would be welcome here too.

    I know these extra features would slow down the server a little, but it's 2010 now; features like these are not that heavy for the CPU, and they make a server friendlier to maintain rather than frightening people off. What do you think? Should or could we have a more user-friendly Linux environment on servers? Is there something that has annoyed you a lot? A lot of things are done in the Unix way, but maybe we should reinvent the wheel in some areas, because apparently even easy tasks are still repetitive and difficult today. It should be easier, I think.

  • Why wouldn't the default Control Adapter mappings work on Chrome or Safari?

    - by Deane
    I have confirmed that my control adapters are not triggering in Chrome and Safari. I've debugged, and the breakpoints inside the adapters just don't get hit in Chrome/Safari, while they work perfectly fine in Firefox/IE. So, for Chrome/Safari, IIS is just ignoring the mapping. My AdapterMappings.browser file looks like this:

        <browsers>
          <browser refID="Default">
            <controlAdapters>
              [...adapters here....]
            </controlAdapters>
          </browser>
        </browsers>

    This should provide mappings for all browsers, correct? I used the Charles proxy to check which user agents were being sent. They are:

        Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/532.5 (KHTML, like Gecko) Chrome/4.1.249.1064 Safari/532.5
        Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/531.22.7 (KHTML, like Gecko) Version/4.0.5 Safari/531.22.7

    Any idea why this would be? Everything I've read tells me that my browser mappings are correct, and, as I said, this works for IE/Firefox, so I know my configuration is technically correct.

  • [JavaScript] Linux Ajax (MooTools Request.JSON) header error

    - by VDVLeon
    Hi all,

    I use the following code to get some JSON data:

        var request = new Request.JSON({
            'url': sourceURI,
            'onSuccess': onPageData
        });
        request.get();

    Request.JSON is a class from MooTools (a JavaScript library). But on Linux (Ubuntu, on Firefox 3.5 and Chrome) the request always fails, so I tried to display the HTTP request Ajax is sending (I used netcat to display it). The request looks like this:

        OPTIONS /the+url HTTP/1.1
        Host: example.com
        Connection: keep-alive
        User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US) AppleWebKit/532.3 (KHTML, like Gecko) Chrome/4.0.226.0 Safari/532.3
        Referer: http://example.com/ref...
        Access-Control-Request-Method: GET
        Origin: http://example.com
        Access-Control-Request-Headers: X-Request, X-Requested-With, Accept
        Accept: */*
        Accept-Encoding: gzip,deflate
        Accept-Language: en-US,en;q=0.8
        Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.3

    The first line of the HTTP request is not what it should be:

        OPTIONS /the+url HTTP/1.1

    It should be:

        GET /the+url HTTP/1.1

    Does anybody know why this happens and how to fix it?
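
    The Access-Control-Request-* headers are the tell-tale: the browser is treating this as a cross-origin request (presumably the page and the target differ in port or subdomain; the dump looks anonymized), and because MooTools adds custom headers such as X-Requested-With, the browser must first send a CORS preflight OPTIONS request. The GET only follows if the server answers the preflight. A server-side sketch of such an answer, in PHP purely for illustration; the allowed origin is an assumption to adapt:

        <?php
        // Hypothetical CORS handling for /the+url.
        if ($_SERVER['REQUEST_METHOD'] === 'OPTIONS') {
            // Answer the preflight so the browser proceeds with the real GET.
            header('Access-Control-Allow-Origin: http://example.com');
            header('Access-Control-Allow-Methods: GET, OPTIONS');
            header('Access-Control-Allow-Headers: X-Request, X-Requested-With, Accept');
            exit; // the preflight response needs no body
        }
        // The actual GET response must also carry the allow-origin header,
        // or the browser will refuse to hand the JSON to your script.
        header('Access-Control-Allow-Origin: http://example.com');
        header('Content-Type: application/json');
        echo json_encode(array('status' => 'ok'));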

  • DotNetNuke and Subversion guidelines

    - by David Stratton
    I've googled, binged, and searched here at Stack Overflow, looking through the related questions, but I'm not finding what I'm looking for. I've also searched the DNN documentation. What I'm looking for is any guidance (tutorials, blogs, step-by-step instructions for setting up a repository, etc.) from people who are experienced in using DotNetNuke with SVN.

    We use SVN for all our source control and have no problem with standard applications, because we pretty much built the repository and directory structure to work with our processes. This means that when we do web sites in Visual Studio, we use file-based web sites rather than setting them up in the local IIS; it just makes things easier for us. However, with DNN it appears that even if you get the source code, it expects to be set up in the local IIS, which means additional headaches for us. For example, we are moving all of our source code off our local C: drives onto a shared drive on a server, to enable backups in addition to our normal source control (this was a management decision). That means we need to change the virtual web app when we make the move.

    Has anyone come up with a good way to work around this? Can DNN be set up so that the developer web server in Visual Studio can be used, so that we can treat it just like any normal web app? Am I missing something obvious?

    Edit (added): I'm willing to accept answers like "We tried it and never got it to work" and "It can't be done", and I'm always open to hearing "It can't be done the way you want; you need to change your procedures to match how it works" if necessary. If you've got experience trying this and just couldn't get it to work, I can learn from that as well, but some detail would be good.

  • Why AltGr+P doesn't work (AutoHotkey)

    - by voodoomsr
    Hi guys, I've tried and tried to find the bug in this script, but I can't. Maybe some of you can give me a hint...

    Problem: when I press AltGr+P, the Delete key is supposed to be triggered. The weird thing is that after one successful delete, if I keep pressing AltGr+P, a "p" appears and Delete isn't triggered anymore. In the meantime I tested another solution (move to the right and then delete with Backspace), which works, but it isn't good when text is selected...

    Here is the code:

        #InstallKeybdHook

        ; frequently used characters
        RAlt & e::
        SendInput []{Left}
        Return

        RAlt & w::
        SendInput <>{Left}
        Return

        RAlt & d::
        SendInput (){Left}
        Return

        RAlt & s::
        SendRaw {}
        SendInput {Left}
        Return

        RAlt & x::
        SendInput ""{Left}
        Return

        RAlt & c::
        SendInput ''{Left}
        Return

        RAlt & f::
        SendInput *
        Return

        RAlt & r::
        SendRaw +
        Return

        RAlt & v::
        SendInput -
        Return

        ; start and end of line
        RAlt & a::
        SendInput {Home}
        Return

        RAlt & z::
        SendInput {End}
        Return

        ; movement while editing
        /*
        RAlt & p::
        SendInput {Right}{BackSpace}
        Return
        */

        <^>!p::
        Send {Del}
        Return

        RAlt & o::
        SendInput {Up}
        Return

        RAlt & l::
        SendInput {Down}
        Return

        RAlt & k::
        SendInput {Left}
        Return

        RAlt & ñ::
        SendInput {Right}
        Return

        RAlt & ,::
        SendInput {Enter}
        Return

        RAlt & i::
        SendInput {BackSpace}
        Return

        ;; clipx
        ^mbutton::
        sendinput ^+{insert}
        Return

        ^+k::^+Left
        +k::+Left
        ^k::Left
        +l::+Down
        ^+l::^+Down
        ^l::^Down
        +ñ::+Right
        ^+ñ::^+Right
        ^ñ::^Right
        +o::+Up
        ^+o::^+Up
        ^o::^Up
        +a::+Home
        ^+a::^+Home
        +z::+End
        ^+z::^+End

  • properly format postal address with line breaks [google maps]

    - by munchybunch
    Using v3 of the Google Maps API, is there any reliable way to format addresses with a line break? By this I mean that something like

        1600 Amphitheatre Parkway Mountain View, CA 94043

    should be formatted as

        1600 Amphitheatre Parkway
        Mountain View, CA 94043

    Looking through the response object from geocoding, there is an address_components array that has 8 components for the above address (not all of them are used in the printed address):

        0: street_number                            long_name: "1600"               short_name: "1600"
        1: route                                    long_name: "Amphitheatre Pkwy"  short_name: "Amphitheatre Pkwy"
        2: locality, political                      long_name: "Mountain View"      short_name: "Mountain View"
        3: administrative_area_level_3, political   long_name: "San Jose"           short_name: "San Jose"
        4: administrative_area_level_2, political   long_name: "Santa Clara"        short_name: "Santa Clara"
        5: administrative_area_level_1, political   long_name: "California"         short_name: "CA"
        6: country, political                       long_name: "United States"      short_name: "US"
        7: postal_code                              long_name: "94043"              short_name: "94043"

    I was thinking that you could just combine the parts you want, like:

        sprintf("%s %s<br />%s, %s %s", array[0].short_name, array[1].short_name,
                array[2].short_name, array[5].short_name, array[7].short_name)

    [Edit: I just realized that sprintf isn't defined by default in JavaScript, so a plain concatenation would do.]

    But that seems awfully unreliable. Does anyone know the details of the structure of address_components, and whether it's reliably similar for street addresses in the US? If I wanted to, I guess I could look for the proper types (street_number, route, etc.) as well. I'd love it if anyone had a better way than what I'm doing here...
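
    For reference, a small sketch of the type-scanning approach mentioned at the end, which avoids fixed indices entirely (the helper names are invented; it assumes the geocoder result shape shown above):

        // Pick the first component that carries the given type.
        function pick(components, type) {
          for (var i = 0; i < components.length; i++) {
            if (components[i].types.indexOf(type) !== -1) {
              return components[i].short_name;
            }
          }
          return '';
        }

        function twoLineAddress(components) {
          return pick(components, 'street_number') + ' ' + pick(components, 'route') +
                 '<br />' +
                 pick(components, 'locality') + ', ' +
                 pick(components, 'administrative_area_level_1') + ' ' +
                 pick(components, 'postal_code');
        }

        // Usage, given a geocoder result object:
        // var html = twoLineAddress(results[0].address_components);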

  • Do you use Styrofoam balls to model your systems?

    - by Nick D
    [Objective-C] "Do you still use Styrofoam balls to model your systems, where each ball represents a class?"

        Tom Love: We do, actually. We've also done a 3D animation version of it, which we found to be nowhere near as useful as the Styrofoam balls. There's something about a physical, conspicuous structure hanging from the ceiling right in the middle of a development project that's regularly updated to provide not only the structure of the system that you're building, but also the current status of each one of the classes. We've done it on 19 projects the last time I counted. One of them was 1,856 classes, which is big - actually, probably bigger than it should be. It was a big commercial project, so it needed to be somewhat big.

        - Masterminds of Programming

    This is the first time I've read or heard about using Styrofoam balls to model classes. Is that a commonly used technique? And how does that sort of modeling help us design the system better? If you have any photos to share that show how the classes are represented, that would be great!

  • Timeout error occurred trying to start MySQL Daemon. CentOS 5

    - by epema
    I ran into trouble with MySQL on my CentOS box. I had some problems, so I backed up my database and removed MySQL with all its dependencies. After that I reinstalled with:

        yum groupinstall "MySQL Database"

    It installed without errors, but starting the MySQL daemon:

        service mysqld start
        Timeout error occurred trying to start MySQL Daemon.
        Starting MySQL:  [FAILED]

    I also ran:

        # /usr/bin/mysql_install_db --user=mysql
        Installing MySQL system tables...
        120112  1:49:44 [ERROR] Error message file '/usr/share/mysql/english/errmsg.sys' had only 480 error messages,
        but it should contain at least 481 error messages.
        Check that the above file is the right version for this program!
        120112  1:49:44 [ERROR] Aborting
        Installation of system tables failed!
        Examine the logs in /var/lib/mysql for more information.
        You can try to start the mysqld daemon with:
        /usr/libexec/mysqld --skip-grant &
        and use the command line tool /usr/bin/mysql to connect to the mysql database and look at the grant tables:
        shell> /usr/bin/mysql -u root mysql
        mysql> show tables
        Try 'mysqld --help' if you have problems with paths. Using --log gives you a log in /var/lib/mysql that may be helpful.
        The latest information about MySQL is available on the web at http://www.mysql.com
        Please consult the MySQL manual section: 'Problems running mysql_install_db', and the manual section that
        describes problems on your OS. Another information source is the MySQL email archive.
        Please check all of the above before mailing us! And if you do mail us, you MUST use the /usr/bin/mysqlbug script!

    Checking the logs with

        less /var/log/mysqld.log

    shows that the log file is empty. I don't even know how to debug this, and I'm not sure what to do. Any recommendations? Thank you.
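
    A diagnostic sketch, based on reading the error literally: mysqld and /usr/share/mysql/english/errmsg.sys disagree about the MySQL version, which usually means a stale file survived the earlier removal. These are standard CentOS commands; the package names are whatever your repositories actually provide.

        # Which package owns errmsg.sys, and which server binary is installed?
        rpm -qf /usr/share/mysql/english/errmsg.sys
        /usr/libexec/mysqld --version

        # If they disagree, remove MySQL again, set aside the leftover share
        # directory (kept as a backup rather than deleted), and reinstall:
        yum remove mysql mysql-server
        mv /usr/share/mysql /usr/share/mysql.bak
        yum groupinstall "MySQL Database"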

  • Why are illegal cookies sent by browsers and accepted by web servers (RFC 2109)?

    - by Artyom
    Hello,

    According to RFC 2109, a cookie's value can be either an HTTP token or a quoted string, and a token can't include non-ASCII characters:

    - Cookie RFC 2109: http://tools.ietf.org/html/rfc2109#page-3
    - HTTP RFC 2068 token definition: http://tools.ietf.org/html/rfc2068#page-16

    However, I have found that Firefox (3.0.6) sends cookies containing UTF-8 strings as-is, and the three web servers I tested (Apache 2, lighttpd, nginx) pass the string as-is to the application. For example, the raw request from the browser:

        $ nc -l -p 8080
        GET /hello HTTP/1.1
        Host: localhost:8080
        User-Agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.0.9) Gecko/2009050519 Firefox/2.0.0.13 (Debian-3.0.6-1)
        Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
        Accept-Language: en-us,en;q=0.5
        Accept-Encoding: gzip,deflate
        Accept-Charset: windows-1255,utf-8;q=0.7,*;q=0.7
        Keep-Alive: 300
        Connection: keep-alive
        Cookie: wikipp=1234; wikipp_username=??????
        Cache-Control: max-age=0

    And the raw HTTP_COOKIE CGI variable passed by Apache, nginx and lighttpd:

        wikipp=1234; wikipp_username=??????

    What am I missing? Can somebody explain?

  • Copying a foreign Subversion repository to keep under dependencies

    - by Jonathan Sternberg
    I want to keep dependencies for my project in our own repository, so that the entire team works with consistent libraries. For example, I want our project to use the Boost libraries. I've seen this done in the past by putting dependencies under a "vendor" or "dependencies" folder.

    But I still want to be able to update these dependencies. If a new feature appears in a library and we need it, I want to just be able to update that dependency within our repository; I don't want to have to re-copy it and put it under version control again. I'd also like us to have the ability to change a dependency when a small change is needed, without that stopping us from ever updating the library again. I want to be able to do something like 'svn cp', then 'svn merge' in the future. I just tried this with the Boost trunk, but I'm not able to get any history using 'svn log' on the copy I made.

    How do I do this? What is usually done for large projects with dependencies?
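
    One detail explains the missing history: svn copy only preserves history within a single repository, so copying from the Boost repository into your own is a plain import as far as Subversion is concerned, and svn log has nothing to show. A common alternative, sketched here with a made-up layout and revision number, is to reference the foreign repository via svn:externals, pinned to a revision so the whole team builds against the same snapshot:

        # Pin Boost to a specific upstream revision under deps/boost
        # (the directory name and the pinned revision are illustrative).
        svn propset svn:externals 'boost -r12345 http://svn.boost.org/svn/boost/trunk' deps/
        svn commit deps/ -m "Pin Boost trunk@12345 as an external"
        svn update          # fetches the external into deps/boost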

  • TC hashing filters - single rule deletion

    - by exa
    For traffic shaping I'm currently using a setup that looks exactly like the setup from LARTC, on this page: http://lartc.org/howto/lartc.adv-filter.hashing.html

    I have a simple problem with it: every time I want to modify something in the hash table (like assigning an IP to a different flowid), I need to delete the whole filter table and add it again, filter by filter. (I actually don't do it by hand; I have a nice program that does it for me... but still...) There is a problem: I have roughly 10k filters allocated this way, and deleting and refilling the whole filter table can get pretty lengthy, which is not exactly good for traffic shaping.

    My program could easily manage to delete only the rules that need to be deleted (thus reducing the whole problem to several commands and milliseconds), but I simply don't know the command that deletes a single hashing rule. My tc filter show:

        filter parent 1: protocol ip pref 1 u32
        filter parent 1: protocol ip pref 1 u32 fh 2: ht divisor 256
        filter parent 1: protocol ip pref 1 u32 fh 2:a:800 order 2048 key ht 2 bkt a flowid 1:101
          match 0a0a0a0a/ffffffff at 16
        filter parent 1: protocol ip pref 1 u32 fh 2:c:800 order 2048 key ht 2 bkt c flowid 1:102
          match 0a0a0a0c/ffffffff at 16
        filter parent 1: protocol ip pref 1 u32 fh 800: ht divisor 1
        filter parent 1: protocol ip pref 1 u32 fh 800::800 order 2048 key ht 800 bkt 0 link 2:
          match 00000000/00000000 at 16
          hash mask 000000ff at 16

    The wish: a 'tc filter del ...' command that removes only one specific filter (for example the 0a0a0a0a IP match, i.e. address 10.10.10.10). Removing some small subgroup would also be good; for example, I could still recreate a bucket (bkt a) pretty fast.

    My attempts: I tried to number all the filters using prio, but with no luck. They just create something unusable (but deletable) below, and the bucketed filters remain there after that gets deleted. Any ideas?

    Edit - a simplified tl;dr description of the problem: I created a hash filter on some interface, just like in http://lartc.org/howto/lartc.adv-filter.hashing.html, and I want to find a command that deletes one rule (e.g. 1.2.1.123) from the table, leaving the rest untouched and working.
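
    For what it's worth, a sketch of the single-rule deletion that u32 supports, addressing the filter by the handle shown in the tc filter show output above (the device name is an assumption; protocol and prio must match the values the filter was added with). This is offered from general tc usage, so verify it against your iproute2 version:

        # Delete only the 10.10.10.10 rule (handle 2:a:800 in the listing above).
        tc filter del dev eth0 parent 1: protocol ip prio 1 handle 2:a:800 u32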

  • How to kill tasks in Windows 7 when even Task Manager won't open or respond?

    - by endolith
    Occasionally one of my computers gets so bogged down that everything locks up: Ctrl+Alt+Del doesn't work, Task Manager won't open, or they work but open so slowly that it would take hours or days to shut down other processes and regain control of the computer. Is there a way to, for instance, force Task Manager to run at the highest priority so it always opens immediately with Ctrl+Shift+Esc, even when some other process or driver is hogging the CPU? Is there some other program that can run in the background and open immediately like this?

    This question isn't about fixing the underlying problems. No matter how much memory you have, it's still possible for a rogue process to eat it all up and lock up the computer in page-fault thrashing, hog the CPU, and so on. This question is about how to take back control of the computer when that happens. Basically, when these kinds of lock-ups happen, I want to open some kind of task manager that pauses every other process and allows me to kill one of them, then lets everything resume so I can save my work. Otherwise my only option is to hold down the power button.

    Antifreeze is supposed to do exactly what I want - pausing all other applications and starting a task manager to kill the offender - but in my testing it actually does neither.

  • How to get associated URLRequest from Event.COMPLETE fired by URLLoader

    - by matt lohkamp
    So let's say we want to load some XML:

        var xmlURL:String = 'content.xml';
        var xmlURLRequest:URLRequest = new URLRequest(xmlURL);
        var xmlURLLoader:URLLoader = new URLLoader(xmlURLRequest);
        xmlURLLoader.addEventListener(Event.COMPLETE, function(e:Event):void{
            trace('loaded', xmlURL);
            trace(XML(e.target.data));
        });

    If we need to know the source URL for that particular XML doc, we've got that variable to tell us, right? Now let's imagine that the xmlURL variable isn't around to help us - maybe we want to load 3 XML docs, named in sequence, and we want to use throwaway variables inside a for-loop:

        for(var i:uint = 3; i > 0; i--){
            var xmlURLLoader:URLLoader = new URLLoader(new URLRequest('content'+i+'.xml'));
            xmlURLLoader.addEventListener(Event.COMPLETE, function(e:Event):void{
                trace(e.target.src); // I wish this worked...
                trace(XML(e.target.data));
            });
        }

    Suddenly it's not so easy, right? I hate that you can't just say e.target.src or whatever - is there a good way to associate URLLoaders with the URL they loaded data from? Am I missing something? It feels unintuitive to me.
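
    A sketch of one standard workaround: URLLoader doesn't expose its request's URL, but a function factory captures each URL in its own closure. (A plain closure written directly in the loop body wouldn't help, since ActionScript 3 has function-level scope, so every iteration would share the same variables.) The helper name is invented:

        // Factory: each call creates a closure holding its own url.
        function makeCompleteHandler(url:String):Function {
            return function(e:Event):void {
                trace('loaded', url);
                trace(XML(URLLoader(e.target).data));
            };
        }

        for (var i:uint = 3; i > 0; i--) {
            var url:String = 'content' + i + '.xml';
            var loader:URLLoader = new URLLoader(new URLRequest(url));
            loader.addEventListener(Event.COMPLETE, makeCompleteHandler(url));
        }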

  • WPF - data binding trigger before content changed

    - by 0xDEAD BEEF
    How do I create a trigger that fires BEFORE a binding changes its value? And how do I do the same thing in a DataTemplate?

        <ContentControl Content="{Binding Path=ActiveView}" Margin="0,95,0,0">
            <ContentControl.Triggers>
                <!-- some trigger to fire when ActiveView is changing or has changed?!? -->
            </ContentControl.Triggers>
        </ContentControl>

        public Object ActiveView
        {
            get { return m_ActiveView; }
            set
            {
                if (PropertyChanging != null)
                    PropertyChanging(this, new PropertyChangingEventArgs("ActiveView"));
                m_ActiveView = value;
                if (PropertyChanged != null)
                    PropertyChanged(this, new PropertyChangedEventArgs("ActiveView"));
            }
        }

    For the DataTemplate:

        <DataTemplate DataType="{x:Type us:LOLClass1}">
            <ContentControl>
                <ContentControl.RenderTransform>
                    <ScaleTransform x:Name="shrinker" CenterX="0.0" CenterY="0.0" ScaleX="1.0" ScaleY="1.0"/>
                </ContentControl.RenderTransform>
                <us:UserControl1/>
            </ContentControl>
            <DataTemplate.Triggers>
                <!-- SOME TRIGGER BEFORE CONTENT CHANGES -->
                <BeginStoryboard>
                    <Storyboard>
                        <DoubleAnimation Storyboard.TargetName="shrinker" Storyboard.TargetProperty="ScaleX"
                                         From="1.0" To="0.8" Duration="0:0:0.3"/>
                        <DoubleAnimation Storyboard.TargetName="shrinker" Storyboard.TargetProperty="ScaleY"
                                         From="1.0" To="0.8" Duration="0:0:0.3"/>
                    </Storyboard>
                </BeginStoryboard>
                <!-- /SOME TRIGGER BEFORE CONTENT CHANGES -->
            </DataTemplate.Triggers>
        </DataTemplate>

    How do I get a notification BEFORE the binding value changes? (I want to capture the outgoing visual component as a bitmap and create a sliding-view animation.)

  • POST XML to server, receive PDF

    - by Shaggy Frog
    Similar to this question, we are developing a Web app where the client clicks a button to receive a PDF from the server. Right now we're using jQuery's .ajax() method to POST the data the back end needs to generate the PDF (we're sending XML) when the button is pressed; the back end then generates the PDF entirely in memory and sends it back as application/pdf in the HTTP response.

    One answer to that question requires the server side to save the PDF to disk so it can give back a URL for the client to GET, but I don't want the back end caching content at all. The other answer suggests a jQuery plugin, but when you look at its code, it actually generates a form element and submits the form; that method won't work for us, since we are sending XML data in the body of the HTTP request.

    Is there a way to have the browser open the PDF without caching it server-side, and without requiring us to throw out our send-data-to-the-server-as-XML solution? (I'd like the browser to behave like it does when a form element is submitted: a POST is made, and then the browser looks at the Content-Type header to determine what to do next, like loading the PDF in the browser window, à la Safari.)
