Search Results

Search found 23782 results on 952 pages for 'claims based authorization'.


  • How do you fix loading plugins in Eclipse 3.5.1 on Linux?

    - by Jay R.
    I have two Linux boxes, both Fedora 11 x64. On one, I downloaded eclipse-java-galileo-SR1-linux-gtk-x86_64.tar.gz, unpacked it to /opt/eclipse-3.5.1/, and used the Install New Software... item to install the SVN team provider and the Polarion SVN connectors. Everything works. On the second, I copied the Eclipse tar.gz over and tried to follow the same steps. When I get to installing the SVN team provider, Eclipse downloads it, claims to install it, and asks to restart. I restart and there is no SVN support. The software installer knows it's there, because I can't reinstall it without uninstalling it first. So the questions: Why isn't the plugin/feature loading for the SVN team support? Is there a checkbox that I forgot about that enables the plugin? Is there a command line option that will force a reload of all the features on disk? I've tried to install other things like FindBugs, but I get the same result. There are no messages in the log file indicating an exception or anything like that.
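
    One possible lead, offered as a guess rather than a confirmed fix: the Eclipse launcher has a -clean option that discards the cached OSGi bundle and extension-registry state, forcing everything installed on disk to be re-resolved at the next start, which is the closest thing to a "force reload all features" switch. The install path below is the one mentioned above:

        cd /opt/eclipse-3.5.1
        ./eclipse -clean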

    Read the article

  • Rails can't find my route but it exists!

    - by DJTripleThreat
    OK, I have events that I want to publish/unpublish with an extra, non-RESTful action. I watched Ryan Bates' railscast on this (http://railscasts.com/episodes/35-custom-rest-actions) and it got me most of the way. I think the problem is that my route is nested in an /admin section. When I run rake routes I get:

        publish_admin_event PUT /admin/events/:id/publish(.:format) {:controller=>"event_services", :action=>"publish"}

    Yet this won't work in my /views/admin/index.html.erb file:

        <%= link_to 'Publish', publish_admin_event(event), :method => :put %>

    because it claims that path doesn't exist! Neither will this:

        <%= link_to 'Publish', {:controller => :event_services, :action => :publish}, {:method => :put, :id => event} %>

    which fails with "No route matches {:controller=>"event_services", :action=>"publish"}". So what gives? (And I've tried restarting my server, so that isn't it.) EDIT: This DOES work:

        <%= link_to 'Publish', "/admin/events/" + event.id.to_s + "/publish", :method => :put %>

    but I'd rather NOT do this.
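
    A hedged observation: rake routes prints the helper's name prefix, but the method a view actually calls is the generated _path (or _url) variant, so the first attempt above is probably just missing that suffix. Assuming the route shown:

        <%= link_to 'Publish', publish_admin_event_path(event), :method => :put %>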

    Read the article

  • How to see external libraries' code when debugging

    - by Sanva
    Hello!! First of all... this is my message #1 in this place, so... please be nice with me ;) I recently started to study Gnome apps/libraries, and I found that debuggers are an excellent way to learn, because seeing the code running helps a lot in understanding the structure of the program. But I have a problem. For example, debugging gnome-panel I found a lot of calls to external functions (basically the GTK+ functions), and although pretending to see all the code of all the functions such applications call would be crazy, there are a lot that would be very interesting to see in action. The problem is that the debugger doesn't have the source of those libraries loaded and can't show it to me; at most it shows the line number where the execution is. I'm using Nemiver, and when it tries to step into an external function it complains that it can't find a file that is supposed to be somewhere. For example, trying to step into gtk_window_set_default_icon_name it tries to load /build/buildd/gtk+2.0-2.16.1/gtk/gtkwindow.c, and for XSetIOErrorHandler, ../../src/ErrHndlr.c. So now I think that I'm doing something wrong... Why is Nemiver looking for those source files in those places? My system does not even have the /build/buildd/ folders... and I don't know if I'm doing something wrong or I need to install something or what. Any suggestion? How do you debug this kind of application? Best regards and thanks a lot for your time, and forgive me if my English is bad.
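
    A hedged pointer rather than a definitive answer: those paths are the build-time locations baked into the distro's debug info, so the debugger needs the matching source installed locally and a mapping from the recorded path to wherever you put it. Nemiver drives gdb underneath, and gdb can remap the recorded path or extend its source search list; the local directories below are examples only:

        (gdb) set substitute-path /build/buildd/gtk+2.0-2.16.1 /home/me/src/gtk+2.0-2.16.1
        (gdb) directory /home/me/src/gtk+2.0-2.16.1/gtk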

    Read the article

  • Can isdigit legitimately be locale-dependent in C?

    - by cdev
    In the section covering setlocale, the ANSI C standard states in a footnote that the only ctype.h functions whose behaviour is not affected by the current locale are isdigit and isxdigit. The Microsoft implementation of isdigit is locale dependent because, for example, in locales using code page 1250 isdigit only returns non-zero for characters in the range 0x30 ('0') - 0x39 ('9'), whereas in locales using code page 1252 isdigit also returns non-zero for the superscript digits 0xB2 ('²'), 0xB3 ('³') and 0xB9 ('¹'). Is Microsoft in violation of the C standard by making isdigit locale dependent? In this question I am primarily interested in C90, which Microsoft claims to conform to, rather than C99. Additional background: Microsoft's own documentation of setlocale incorrectly states that isdigit is unaffected by the LC_CTYPE part of the locale. The section of the C standard that covers the ctype.h functions contains some wording that I consider ambiguous: "The behavior of these functions is affected by the current locale. Those functions that have locale-specific aspects only when not in the "C" locale are noted below." I consider this ambiguous because it is unclear what it is trying to say about functions such as isdigit for which there are no notes about locale-specific aspects. It might be trying to say that such functions must be assumed to be locale dependent, in which case Microsoft's implementation of isdigit would be OK. (Except that the footnote I mentioned earlier seems to contradict this interpretation.)
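
    For concreteness, a small C program to observe the behaviour in question. The Windows locale name is an assumption and may need adjusting; note the use of unsigned char, since passing a negative char to isdigit() is undefined behaviour:

        #include <ctype.h>
        #include <locale.h>
        #include <stdio.h>

        int main(void)
        {
            /* 0xB2 is the superscript-two character in code page 1252 */
            unsigned char c = 0xB2;

            setlocale(LC_CTYPE, "C");
            printf("\"C\" locale:  isdigit(0xB2) = %d\n", isdigit(c));

            /* locale names are platform-specific; this one is a guess
               that works on typical Windows CRTs */
            if (setlocale(LC_CTYPE, "English_United States.1252") != NULL)
                printf("1252 locale: isdigit(0xB2) = %d\n", isdigit(c));

            return 0;
        }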

    Read the article

  • Why can't I handle a KeyboardInterrupt in python?

    - by Josh
    I'm writing Python 2.6.6 code on Windows that looks like this:

        try:
            dostuff()
        except KeyboardInterrupt:
            print "Interrupted!"
        except:
            print "Some other exception?"
        finally:
            print "cleaning up...."
            print "done."

    dostuff() is a function that loops forever, reading a line at a time from an input stream and acting on it. I want to be able to stop it and clean up when I hit Ctrl-C. What's happening instead is that the code under except KeyboardInterrupt: isn't running at all. The only thing that gets printed is "cleaning up...", and then a traceback is printed that looks like this:

        Traceback (most recent call last):
          File "filename.py", line 119, in <module>
            print 'cleaning up...'
        KeyboardInterrupt

    So the exception handling code is NOT running, and the traceback claims that a KeyboardInterrupt occurred during the finally clause, which doesn't make sense because hitting Ctrl-C is what caused that part to run in the first place! Even the generic except: clause isn't running. EDIT: Based on the comments, I replaced the contents of the try: block with sys.stdin.read(). The problem still occurs exactly as described, with the first line of the finally: block running and then printing the same traceback.
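
    A defensive sketch rather than an explanation of the root cause: the traceback suggests the interrupt is delivered (or re-delivered) while the finally block itself is executing, so one workaround is to let the cleanup retry if it is itself interrupted. dostuff() is stubbed out here to keep the example self-contained:

        import sys

        def dostuff():
            sys.stdin.read()  # stand-in for the real read loop

        def cleanup():
            print "cleaning up..."
            print "done."

        try:
            try:
                dostuff()
            except KeyboardInterrupt:
                print "Interrupted!"
        finally:
            try:
                cleanup()
            except KeyboardInterrupt:
                # a late-delivered Ctrl-C landed inside the cleanup itself;
                # run it once more so it still completes
                cleanup()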

    Read the article

  • What problem did MS solve by creating PowerShell? [closed]

    - by Fred
    I'm asking because PowerShell confuses me. I've been trying to write some deployment scripts using PowerShell and I've been less than enthused by the result. I have a co-worker who loves PowerShell and defends it at every turn. Said co-worker claims PowerShell was never written to be a strong shell, but instead was written to: a) Allow you to peek and poke at .NET assemblies on the command-line (why is this a reason for PowerShell to exist?) b) To be hosted in .NET applications for automation, similar to DCOP in KDE and how Gnome is using CORBA. c) to be treated as ".NET script" rather than as an actual shell (related to b). I've always felt like Windows was missing a decent way to bang out automation scripts. cmd is too simplistic in many cases, and WSH is too obtuse (although the combination can be used successfully, I'm not a fan). When I first heard about PowerShell I felt like Windows was finally getting a decent shell that would be able to help with automation of many tasks, but recent experiences, and my co-worker, tell me otherwise. To be clear, I don't take issue with the fact that it's built on .NET, or that it passes objects around rather than text (despite my Unix background :]), and I'm not arguing that PowerShell is useless, but from what I can see, it doesn't solve the problem I was hoping it would solve very well. As soon as you step outside of the .NET/Powershell world, things quit being nice and cozy for you. So with all that out of the way, what problem did MS solve by creating PowerShell, or is it some political bastard child as I suspect? I've googled and haven't hit upon anything that sufficiently answered that for me, but the more citations the better.

    Read the article

  • Does anyone still believe in the Capability Maturity Model for Software?

    - by Ed Guiness
    Ten years ago when I first encountered the CMM for software I was, I suppose like many, struck by how accurately it seemed to describe the chaotic "level one" state of software development in many businesses, particularly with its reference to reliance on heroes. It also seemed to provide realistic guidance for an organisation to progress up the levels, improving their processes. But while it seemed to provide a good model and realistic guidance for improvement, I never really witnessed adherence to CMM having a significant positive impact on any organisation I have worked for, or with. I know of one large software consultancy that claims CMM level 5 (the highest level) when I can see first hand that their processes are as chaotic, and the quality of their software products as varied, as other, non-CMM businesses. So I'm wondering, has anyone seen a real, tangible benefit from adherence to process improvement according to CMM? And if you have seen improvement, do you think that the improvement was specifically attributable to CMM, or would an alternative approach (such as six-sigma) have been equally or more beneficial? Does anyone still believe? As an aside, for those who haven't yet seen it, check out this funny-because-it's-true parody.

    Read the article

  • PHP/JSON: My field names are being truncated to 30 characters. Can I stop this?

    - by Biff MaGriff
    Hi everyone! OK, so I got this piece of vendor software that they said should be run on an Apache PHP server and a MySQL database. I didn't have either of those, so I put it on a PHP IIS server and converted the code to work with SQL Server (mysql_select_db became mssql_select_db, among other things). So I have the following code in a PHP file:

        $query = "SELECT * FROM TableName WHERE KEY_FIELD = '".$keyField."';";
        $result = mssql_query($query);
        $arr = array();
        while ( $obj = mssql_fetch_object($result) ) {
            $arr[] = $obj;
        }
        echo '{"results":'.json_encode($arr).'}';

    and my results look something like this (captured with Fiddler 2):

        {"results":[{"KEY_FIELD":"57", "My30characterlongfieldthatiscu":"GoodValue"}]}

    "My30characterlongfieldthatiscu" should be "My30characterlongfieldthatiscutoff". Kinda weird, no? The vendor claims that the app works perfectly on their end. I'm thinking this is some sort of IIS PHP limit; is there a way around it, or can I expand it? I found this solution (http://www.php.net/manual/en/ref.mssql.php#74834) but I don't understand it. Thanks!
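
    One commonly cited cause, offered here as a hypothesis rather than a diagnosis: the old mssql_* functions sit on top of DB-Library, whose legacy limit truncates column identifiers to 30 characters. Microsoft's sqlsrv driver for IIS does not carry that limit, so a sketch of the same query through it may be worth trying (connection details are placeholders):

        <?php
        $keyField = "57";  // example value
        $conn = sqlsrv_connect("myServer", array(
            "Database" => "myDb", "UID" => "myUser", "PWD" => "myPassword"));
        // a parameterized query also avoids the string splicing above
        $stmt = sqlsrv_query($conn,
            "SELECT * FROM TableName WHERE KEY_FIELD = ?", array($keyField));

        $arr = array();
        while ($obj = sqlsrv_fetch_object($stmt)) {
            $arr[] = $obj;
        }
        echo '{"results":' . json_encode($arr) . '}';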

    Read the article

  • How do I return a variable in JavaScript?

    - by bmckim
    I am working with the Google Maps API, and whenever I return the variable to the initialize function from the codeLatLng function it claims to be undefined. If I print the variable from codeLatLng it shows up fine.

        var geocoder;
        function initialize() {
          geocoder = new google.maps.Geocoder();
          var latlng = new google.maps.LatLng(40.730885,-73.997383);
          var addr = codeLatLng();
          document.write(addr);
        }
        function codeLatLng() {
          var latlng = new google.maps.LatLng(40.730885,-73.997383);
          if (geocoder) {
            geocoder.geocode({'latLng': latlng}, function(results, status) {
              if (status == google.maps.GeocoderStatus.OK) {
                if (results[1]) {
                  return results[1].formatted_address;
                } else {
                  alert("No results found");
                }
              } else {
                alert("Geocoder failed due to: " + status);
              }
            });
          }
        }

    prints out undefined. If I instead do document.write(results[1].formatted_address) inside the callback, with everything else unchanged, it prints out New York, NY 10012, USA.
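
    The usual explanation: geocode() is asynchronous, so the inner return hands its value to Google's callback machinery, not to codeLatLng(), which has already returned by then. A sketch of the callback-passing fix, using only the API calls shown above:

        var geocoder;

        function codeLatLng(latlng, done) {
          geocoder.geocode({'latLng': latlng}, function(results, status) {
            if (status == google.maps.GeocoderStatus.OK && results[1]) {
              done(results[1].formatted_address);
            } else {
              done(null);  // no result; the caller decides what to do
            }
          });
        }

        function initialize() {
          geocoder = new google.maps.Geocoder();
          var latlng = new google.maps.LatLng(40.730885, -73.997383);
          codeLatLng(latlng, function(addr) {
            document.write(addr);  // runs later, once Google answers
          });
        }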

    Read the article

  • R: building a simple command line plotting tool/Capturing window close events

    - by user275455
    I am trying to use R within a script that will act as a simple command-line plot tool, i.e. the user pipes in a CSV file and they get a plot. I can get to R fine and get the plot to display through various temp-file machinations, but I have hit a roadblock: I cannot figure out how to get R to keep running until the user closes the window. If I plot and exit, the plot disappears immediately. If I plot and use some kind of infinite loop, the user cannot close the plot; he must exit by using an interrupt, which I don't like. I see there is a getGraphicsEvent function, but it claims that the device (X11) is not supported. Anyway, it doesn't appear to actually support an onClose event, only onMouseDown. Any ideas on how to solve this? edit: Thanks to Dirk for the advice to check out the tk interface. Here is the test code that works:

        require(tcltk)
        library(tkrplot)

        ## function to display the plot, called by tkrplot and embedded in a window
        plotIt <- function() {
            plot(x=1:10, y=1:10)
        }

        ## create the top-level window
        tt <- tktoplevel()

        ## variable to wait on, like a condition variable, set by the event handler
        done <- tclVar(0)

        ## bind to the window destroy event; set done when destroyed
        tkbind(tt, "<Destroy>", function() tclvalue(done) <- 1)

        ## have tkrplot embed the plot window, then realize it with tkgrid
        tkgrid(tkrplot(tt, plotIt))

        ## wait until done is true
        tkwait.variable(done)

    Read the article

  • Compiling scalafx for Java 7u7 (that contains JavaFX 2.2) on OS X

    - by akauppi
    The compilation instructions for scalafx say to do:

        export JAVAFX_HOME=/Path/To/javafx-sdk2.1.0-beta
        sbt clean compile package make-pom package-src

    However, with the new packaging of JavaFX as part of the Java JDK itself (i.e. 7u7 for OS X) there no longer seems to be such a javafx-sdkx.x.x folder. The Oracle docs say that the JavaFX JDK is placed alongside the main Java JDK (in the same folders). So I do:

        $ export JAVAFX_HOME=/Library/Java/JavaVirtualMachines/jdk1.7.0_07.jdk
        $ sbt clean
        [warn] Using project/plugins/ (/Users/asko/Sources/scalafx/project/plugins) for plugin configuration is deprecated.
        [warn] Put .sbt plugin definitions directly in project/,
        [warn]   .scala plugin definitions in project/project/,
        [warn]   and remove the project/plugins/ directory.
        [info] Loading project definition from /Users/asko/Sources/scalafx/project/plugins/project
        [info] Loading project definition from /Users/asko/Sources/scalafx/project/plugins
        [error] java.lang.NullPointerException
        [error] Use 'last' for the full log.
        Project loading failed: (r)etry, (q)uit, (l)ast, or (i)gnore?

    Am I doing something wrong, or is scalafx not yet compatible with the latest Java release (7u7, JavaFX 2.2)? What can I do? http://code.google.com/p/scalafx/ Addendum: ...and finally (following Igor's solution below) sbt run launches the colorful circles demo easily (well, if one has a supported GPU, that is). Oracle claims that "JavaFX supports graphic hardware acceleration on any Mac OS X system that is Lion or later", but I am inclined to think the NVidia-powered Mac Mini I'm using does software rendering. A recent MacBook Air (Core i7) is a completely different beast! :)

    Read the article

  • What database options do I have for the Blackberry?

    - by peeping-jane
    I notice most of the discussions about Blackberry database options are old, and generally not too informative. As of today, March 31st, 2010, what is the best, most universally supported, free database option available for Blackberry developers? I heard SQLite is available for JDE v5, but last I checked, that was still in beta, and I didn't want to commit to developing on a system that is not supported by most of the phones in service. Thing is, I don't see any dates on these claims. For all I know, the announcements I am reading are from 2008. So, I am still on v 4.7. I need to use a relational DB for the app I am developing, but there aren't many resources for DB handling available - or at least resources that are useful to me. I find a lot of "tutorials" that assume you know everything there is to know about Blackberry development, or Java. But no complete classes or anything. Many of these examples don't even work. Eclipse gives warnings and errors from code copied and pasted from other people's examples. I can answer any questions that may assist in this case. Hopefully, this thread will help many BB developers in the future.

    Read the article

  • If Statements Skipping or Evaluating Strangely, JavaScript and jQuery

    - by tlm2021
    So in jQuery, I have a global variable currentSubNav that stores the currently visible element. The following code executes on mouseenter. I need it to store the element's ID and check whether there was one; if there wasn't, set the new visible element to the default:

        $('#mainMenu a').mouseenter(function() {
            var newName = $(this).attr("id");
            if(newName == ''){
                var newName = "default";
            }

    Then it checks to see if the new element matches the current one. If so, it returns. If not, it performs the animations to bring in the new one:

            if(newName == currentSubNav){
                return;
            }else{
                $("div[name=" + currentSubNav + "]").animate({"left": "+=600px", "opacity": "toggle"}, "slow");
                $("div[name=" + newName + "]").css({"margin-top": "0"});
                $("div[name=" + newName + "]").fadeIn(2000);
                $("div[name=" + currentSubNav + "]").animate({"left": "-=600px"}, 0);
                currentSubNav = newName;
                return;
            }
        });

    I'm using Chrome at the moment, and according to the dev tools that isn't what happens. Problem #1: $(this).attr("id") isn't returning undefined as the documentation claims; it seems to be returning "". BUT, when I have the if statement as I do above, it skips over the statement entirely. I set a breakpoint, but it never pauses execution, so the statement is never evaluated. Problem #2: After the animations occur, instead of using the return at the end of the statements it goes back and uses the return for the newName == currentSubNav if statement. I guess that's not a big deal, but it's not the intended behavior. I'm fairly new to JavaScript, and it appears I'm missing something about how JavaScript works, but I can't find what anywhere. Any help?
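
    A hedged reading of Problem #1: attr("id") yields undefined (not "") when the attribute is absent, so newName == '' never matches; and since var is function-scoped, the inner redeclaration is a no-op anyway. One way to cover both cases in a single line:

        $('#mainMenu a').mouseenter(function() {
            // falls back to "default" for undefined, null, or ""
            var newName = $(this).attr("id") || "default";

            if (newName === currentSubNav) {
                return;
            }
            // ...animation code as before...
        });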

    Read the article

  • How to lazily process an XML document with hexpat?

    - by Florian
    In my search for a Haskell library that can process large (300-1000 MB) XML files I came across hexpat. There is an example in the Haskell Wiki (http://www.haskell.org/haskellwiki/Hexpat/) that claims to

        -- Process document before handling error, so we get lazy processing.

    For testing purposes I redirected the output to /dev/null and threw a 300 MB file at it. Memory consumption kept rising until I had to kill the process. Then I removed the error handling from the process function:

        process :: String -> IO ()
        process filename = do
            inputText <- L.readFile filename
            let (xml, mErr) = parse defaultParseOptions inputText :: (UNode String, Maybe XMLParseError)
            hFile <- openFile "/dev/null" WriteMode
            L.hPutStr hFile $ format xml
            hClose hFile
            return ()

    As a result the function now uses constant memory. Why does the error handling result in massive memory consumption? As far as I understand it, xml and mErr are two separate unevaluated thunks after the call to parse. Does format xml evaluate xml and build the evaluation tree of mErr? If yes, is there a way to handle the error while using constant memory?
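
    If the wiki's comment is read literally, the trick may simply be ordering: stream the formatted tree first and inspect mErr only afterwards, because the error slot cannot be decided until the entire input has been consumed, so forcing it early forces the whole tree. A sketch under that assumption (imports as in the original, plus System.IO; XMLParseError is assumed to have a Show instance):

        process :: String -> IO ()
        process filename = do
            inputText <- L.readFile filename
            let (xml, mErr) = parse defaultParseOptions inputText
                                :: (UNode String, Maybe XMLParseError)
            hFile <- openFile "/dev/null" WriteMode
            L.hPutStr hFile $ format xml   -- consume the document lazily first
            hClose hFile
            case mErr of                   -- only now force the error thunk
                Nothing  -> return ()
                Just err -> hPutStrLn stderr ("XML parse error: " ++ show err)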

    Read the article

  • Tracking a fragment of a file in two places with git

    - by mabraham
    Hi, I have code such as

        void myfunc() {
            introduction();
            while (condition()) {
                complex();
                loop();
                interior();
                code();
            }
            cleanup();
        }

    which I wish to duplicate into two versions, viz:

        void myfuncA() {
            introduction();
            minorchangeA();
            while (condition()) {
                complex();
                loop();
                interior();
                code();
            }
            cleanup();
        }

        void myfuncB() {
            introduction();
            minorchangeB();
            while (condition()) {
                complex();
                modifiedB();
                loop();
                interior();
                code();
            }
            cleanup();
            extracleanupB();
        }

    git claims to track content rather than files, so do I need to tell it that there are chunks here that are common to both myfuncA and myfuncB, so that when merging with upstream changes to myfunc those changes propagate to both myfuncA and myfuncB? If so, how? The code could be written so that a single myfuncAB did the correct thing at each point by testing for condition A or B, but that could seriously hinder readability or performance.
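
    For what it's worth, git merges whole files at the level of changed line runs; it has no notion of a fragment shared between two functions, so an upstream change to myfunc will not propagate into myfuncA and myfuncB by itself. The usual remedy is to make the sharing explicit in the code instead. A sketch in the same style (hook names are illustrative; declarations of the question's functions are assumed):

        typedef void (*hook_t)(void);

        static void noop(void) {}

        static void myfunc_common(hook_t after_intro, hook_t in_loop, hook_t at_end)
        {
            introduction();
            after_intro();
            while (condition()) {
                complex();
                in_loop();      /* modifiedB() slots in here for version B */
                loop();
                interior();
                code();
            }
            cleanup();
            at_end();
        }

        void myfuncA(void) { myfunc_common(minorchangeA, noop, noop); }
        void myfuncB(void) { myfunc_common(minorchangeB, modifiedB, extracleanupB); }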

    Read the article

  • Configuring VisualSVN to exclude files

    - by douglasrahn
    I have been searching for any documentation on how to exclude files for VisualSVN but have not found any. All the documentation I find seems not to match my file structure, or I am missing some files/directories referenced. For example, the only file I find with configuration items in it seems to be completely commented out and is missing the miscellaneous section as well as any auto-props enable setting, etc. Ultimately I need to exclude some files so that my development can continue without SVN errors. I am constantly receiving errors for pbxuser and other project files, and would like to make sure this is not causing some of my other headaches. Here is information I would love to use but can't, as it doesn't match:

        How to "fix" Subversion in XCode 3
        Posted on December 10, 2008 by Rodney Aiglstorfer in Xcode

        If you don't take the necessary steps to prepare for Subversion, you will run into problems using it in XCode. This is because XCode produces files that "confuse" Subversion: it either thinks they are text files when they are really binary files, or the reverse. To overcome these limitations, you need to make some simple changes to the Subversion configuration file in your user home directory. Here are some steps you can follow to ensure that you will be able to use Subversion within XCode without any issues.

        Step 1. Open the Subversion configuration file ~/.subversion/config
        NOTE: If the ".subversion" directory doesn't exist yet, then run this command, which fails but will create the necessary files to get you started: svn status

        Step 2. Enable "global-ignores" and add new things to ignore
        Find the line that contains the text "global-ignores" and append the following text: build *~.nib *.so *.pbxuser *.mode* *.perspective*

    What I am looking for, really, is how to exclude these files with VisualSVN; it shouldn't really differ from regular svn, since it claims to use the same product and just places a GUI on it.

    Read the article

  • Double Layer DVD+R burning problem - I/O Error

    - by Mehper C. Palavuzlar
    I have a modern PC (quad-core CPU, 4 GB RAM, Win7 Home Premium 64-bit), but I have a problem burning .dvd images to Double Layer (8.5 GB) DVDs. I have wasted too many DVD+R DL discs, to no avail. Here is a short explanation of what I did: I'm using ImgBurn v2.5.0.0 (latest version). I'm trying to burn an image file (.dvd), which sits together with the related .iso file in the same folder. In ImgBurn, I select the file with the .dvd extension and set the writing speed to 2.4x. The burning process starts normally, but at around 7% it gives an I/O Write Error, shown in the screenshot. I wasted 3 discs (Magic, made in Taiwan, DVD+R DL, 8.5 GB) trying the same thing. My DVD writer is an LG GH22NP20 with an IDE connection. I updated its firmware from 1.04 to 2.00, but again no success in burning. Then my cousin brought his LG (an older model) which, he claims, successfully wrote DL discs of the same brand (Magic). I unplugged my LG, plugged the older one in, and tried to burn the image again. It also gave an I/O Error, without even getting to 7%. I tried another burning program (CloneCD), but failed again. Then I bought other brands (TDK and Verbatim) and tried to burn the image. The burning process started successfully, but failed again at around 14% (for Verbatim) and 25% (for TDK). Here is a screeny from ImgBurn: I've burned lots of 4.7 GB DVD+Rs and DVD-Rs successfully with this LG writer, without even a single error, but this case is very bothering for me. What should I do? Should I buy a new DVD writer other than LG? Could this be related to Windows or my hardware configuration? Thanks for your help. Edit: My burner works on my cousin's machine, so the problem must be related to my system. What could be the reason? Latest news: I borrowed an external USB DVD writer from a friend, a PHILIPS SPD3000CC (an old model). Guess what! It burns DVD+R DLs successfully! How come an internal DVD writer in a brand-new computer system cannot burn DL DVDs? Now I'm considering buying a new internal DVD writer with not an IDE but a SATA connection...

    Read the article

  • Why is a site serving different SSL certs to different browsers?

    - by TRiG
    The SSL certificate on menswearireland.com and on www.menswearireland.com works fine in Safari, Chrome, SeaMonkey, K-Meleon, QtWeb, Firefox, and Opera. However, Internet Explorer claims that there is an error:

        The security certificate presented by this website was not issued by a trusted certificate authority.
        The security certificate presented by this website was issued for a different website's address.
        Security certificate problems may indicate an attempt to fool you or intercept any data you send to the server.

        Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.0; Trident/5.0)

    Another site hosted on the same managed server shows no errors: achill-fieldschool.com and www.achill-fieldschool.com work fine in IE, even though as far as I can tell the certificate is set up identically. What am I doing wrong? This is a LAMPP server running Plesk. It looks like the server is showing different certificates to different clients: to some clients it shows a RapidSSL certificate made out to www.menswearireland.com, with menswearireland.com as a valid alternative name; to other clients, it shows a Parallels Panel certificate, made out to Parallels Panel. Here are results from a few different online SSL checkers; most say it's fine, while two show errors.

        Comodo, DigiCert, and SSL Shopper SSL checks all show it as valid:
            Common name: www.menswearireland.com
            SANs: www.menswearireland.com, menswearireland.com
            Valid from October 2, 2012 to November 4, 2013
            Serial Number: 559425 (0x88941)
            Signature Algorithm: sha1WithRSAEncryption
            Issuer: RapidSSL CA

        The GeoCerts SSL check shows it as invalid and seems to see a completely different certificate:
            Common name: Parallels Panel
            Organization: Parallels
            Valid from August 15, 2012 to August 15, 2013
            Issuer: Parallels Panel

        The Symantec SSL check shows it as invalid and sees more than one certificate:
            "The certificate installation checker connected to the Web server and read its certificates, but could not determine which is the primary certificate for the Web server."

    Incidentally, on both menswearireland.com and achill-fieldschool.com the homepage will redirect from HTTPS to HTTP. To see SSL details, visit the page /account on both (that page will redirect from HTTP to HTTPS). I've found more information in a more detailed online SSL checker (https://www.ssllabs.com/ssltest/analyze.html?d=menswearireland.com), which reports: "This site works only in browsers with SNI support". My understanding is that SNI (RFC 6066) is a method for putting many SSL sites on one shared IP address and port. It does not work in Internet Explorer on older versions of Windows (this has to do with the version of Windows, not the version of Internet Explorer). However, all our SSL sites are on a unique IP address, so we shouldn't need SNI.
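
    One way to test the SNI theory directly from a shell: if the two commands below print different subjects, the server is handing its Parallels default certificate to non-SNI clients, whatever the intended IP allocation:

        # with SNI, as modern browsers send it
        openssl s_client -connect menswearireland.com:443 \
            -servername menswearireland.com </dev/null | openssl x509 -noout -subject

        # without SNI, roughly what older IE/Windows combinations send
        openssl s_client -connect menswearireland.com:443 \
            </dev/null | openssl x509 -noout -subject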

    Read the article

  • Why do I need to set up Autologon values in the registry twice before it works, and can I fix this?

    - by jJack
    Background: As part of an automated testing suite I am building, I need to set up Autologon on my virtual machines 'on demand'. By on demand, I mean that I don't want to pre-configure my VM or any snapshot to have Autologon set up already, for security reasons and also a huge business case. My solution so far: I'm copying a script to the guest machine and then using Sysinternals PsExec to execute it. The script is:

        reg add "hklm\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /f /v DefaultUserName /t REG_SZ /d myusername
        reg add "hklm\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /f /v DefaultPassword /t REG_SZ /d myfakepassword
        reg add "hklm\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /f /v DefaultDomainName /t REG_SZ /d mydomain
        reg add "hklm\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /f /v ForceAutoLogon /t REG_SZ /d 1
        reg add "hklm\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon" /f /v AutoAdminLogon /t REG_SZ /d 1
        reg add "hklm\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon\AutoLogonChecked" /f /ve /d 1

    Note: I don't believe AutoLogonChecked is required for machines post Windows 2000, but I'm doing it just in case for now. Maybe ForceAutoLogon isn't either; not sure yet. The problem: I see PsExec executes this properly and all the values are in the registry; however, when I restart the machine, the user isn't automatically logged on. When I run this a second time and then restart the machine, the user is finally logged on. A diff between the registry states shows that the first time I run this, it is missing both the "1" for AutoAdminLogon and also the DefaultPassword key. The second time I execute it, these values are correctly intact as I intended. So, what is going on here? Is this expected? This post claims in the end that it really all just works (the problem there was that a logoff script was resetting the values). Doesn't seem to work for me, however. Note this seems unique to Windows 7; it does not occur in Windows XP. Also note that you don't need PsExec to recreate the issue; just modify the registry yourself.

    EDIT/update:
        - Log in interactively and run the script (so, not executing it remotely): logging off automatically logs me back in (so, it works).
        - Remotely execute the script in the guest while I'm interactively logged in: logging off automatically logs me back in (so, it works).
        - Remotely execute the script in the guest with a non-interactive session: if I log in afterwards (so, interactive now) and then back off, it logs me back in (so, it then works).

    EDIT/update 2: This only occurs for Win7x86, Win7x64, and Win8x64. It does not occur for Windows XP.

    Read the article

  • What are some of the best wireless routers for a price-conscious home power-user?

    - by Alain
    I'm extremely dissatisfied with the 'popular' choice for routers in homes and small offices. They are expensive (upwards of 60$), lack a great deal of useful configuration options, and seem to need to be restarted quite often (Linksys comes to mind). I've been on the market for a good router lately, slowly collecting a set of requirements I feel good routers should meet:

        - Maximum number of TCP/IP connections. This isn't something I see any routers advertise, but in terms of supporting torrent applications, I've been screwed by routers that support less than 20 here. From what I understand a fairly standard number is 200, but there are not-so-expensive routers that support thousands.
        - Router configuration menu. Most have standard menus that let you set up basic things like your wireless network encryption settings, uPnP, and maybe even DMZ (demilitarized zones). An absolute requirement for me, however, is firmware good enough to support: explicit port forwarding; assigning static local IPs to specific MAC addresses, or at least port forwarding by MAC address; port, IP, and MAC filtering; dynamic DNS service for home users who want to set up a server but have a dynamic IP; and ideally traffic shaping, giving priority to packets from certain machines or over certain ports.
        - Strong wireless signal. If getting a reliable signal requires me to be so close to the router that I can connect an Ethernet cable, it's not good enough.
        - As many Ethernet ports as possible, because I want to be able to switch from console gaming to PC gaming without visiting my router.

    So far, the best thing I've stumbled upon (in the bargain bin at Staples) was a 20$ retail plus router. It was meant to be the cheapest alternative until I could find something better to purchase online, but I was actually blown away by the firmware capabilities. It supports defining reserved bandwidth for certain network traffic, dynamic DNS, reserving local IPs for specific MAC addresses, etc. At 2 am when my roommate is killing our Internet with their torrents, I can limit their bandwidth without outright blacklisting them. I have, however, met serious limitations when it comes to network traffic between local machines. It claims a 300 Mbps connection, but I have trouble streaming videos from my PC to my console or other laptops wirelessly. It has a meltdown and needs to be reset once in a while (no more than a couple of times a month), and it's got a 200-connection limit. There are 4 Ethernet ports in the back, but I'm pretty sure the first doesn't work. So some great answers to this question would be:

        - Any metrics you use to compare routers, and requirements you have for new candidates.
        - The best routers you've found for supporting home servers, file management systems, high-volume torrent traffic, good price/feature ratio, etc.
        - Good configuration advice (aside from 'use Ethernet whenever possible').

    Thanks for your feedback and experiences!

    Read the article

  • Unusual Apache->Tomcat caching issue.

    - by iftrue
    Right now, I have an Apache setup sitting in front of Tomcat to handle caching. This setup has been given to an external service to manage, and since the transition, I've noticed odd behavior. Specifically, when I request a swf file from the web server, I hit the Apache cache (good), but occasionally I'll receive a truncated file. Once I receive this truncated file, the cache will NOT refresh until I manually delete the cache and let the swf pull down from tomcat again. The external service claims that the configuration is fine, but I don't see any way this could be happening aside from improper configuration. Now, there are two apache and two tomcat servers under a load balancer, and occasionally one apache cache will break while another does not (leading to 50% of all requests getting bad, truncated data). Where should I start looking to debug this issue? What could POSSIBLY be causing this odd behavior? Edit: Inspecting the logs, tomcat throws this: java.io.IOException: Bad file number at java.io.FileInputStream.readBytes(Native Method) at java.io.FileInputStream.read(FileInputStream.java:199) at java.io.BufferedInputStream.read1(BufferedInputStream.java:256) at java.io.BufferedInputStream.read(BufferedInputStream.java:317) at java.io.FilterInputStream.read(FilterInputStream.java:90) at org.apache.catalina.servlets.DefaultServlet.copyRange(DefaultServlet.java:1968) at org.apache.catalina.servlets.DefaultServlet.copy(DefaultServlet.java:1714) at org.apache.catalina.servlets.DefaultServlet.serveResource(DefaultServlet.java:809) at org.apache.catalina.servlets.DefaultServlet.doGet(DefaultServlet.java:325) at javax.servlet.http.HttpServlet.service(HttpServlet.java:690) at javax.servlet.http.HttpServlet.service(HttpServlet.java:803) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:175) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102) at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:568) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109) at org.apache.catalina.ha.session.JvmRouteBinderValve.invoke(JvmRouteBinderValve.java:209) at org.apache.catalina.ha.tcp.ReplicationValve.invoke(ReplicationValve.java:347) at org.terracotta.modules.tomcat.tomcat_5_5.SessionValve55.invoke(SessionValve55.java:57) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:286) at org.apache.jk.server.JkCoyoteHandler.invoke(JkCoyoteHandler.java:190) at org.apache.jk.common.HandlerRequest.invoke(HandlerRequest.java:283) at org.apache.jk.common.ChannelSocket.invoke(ChannelSocket.java:767) at org.apache.jk.common.ChannelSocket.processConnection(ChannelSocket.java:697) at org.apache.jk.common.ChannelSocket$SocketConnection.runIt(ChannelSocket.java:889) at org.apache.tomcat.util.threads.ThreadPool$ControlRunnable.run(ThreadPool.java:690) at java.lang.Thread.run(Thread.java:619) followed by access_log.2009-12-14.txt:1.2.3.4 - - [14/Dec/2009:00:27:32 -0500] "GET /myApp/mySwf.swf HTTP/1.1" 304 - access_log.2009-12-14.txt:1.2.3.4 - - [14/Dec/2009:01:27:33 -0500] "GET /myApp/mySwf.swf HTTP/1.1" 304 - 
access_log.2009-12-14.txt:1.2.3.4 - - [14/Dec/2009:01:39:53 -0500] "GET /myApp/mySwf.swf HTTP/1.1" 304 - access_log.2009-12-14.txt:1.2.3.4 - - [14/Dec/2009:02:27:38 -0500] "GET /myApp/mySwf.swf HTTP/1.1" 304 - So apache is caching the bad file size. What could possibly be causing this, and possibly separate, how do I ensure that this exception does not get written to cache?

    Read the article

  • Is real-time or synchronous replication possible over a WAN link?

    - by johnnyb10
    The company I work for is looking to implement truly real-time file replication with file locking over a WAN link that spans over 2000 miles. We currently have a 16-drive SAN setup in our east coast office. We also have an office out in Colorado that will have the same exact SAN setup. The idea is to have those two SANs contain the same exact data at all times, which will allow us to work with the same data pool, and which will also provide use with an offsite backup solution, should a failure occur on either end. We're running Server 2008. The objective is to enable users in the east coast office to work on files and have those changes be instantly updated on the Colorado SAN as well. We also need there to be file locking so that there will be no conflicts or overwritten changes if users attempt to work on the same file. Is this scenario even possible, at speeds that would make the files usable? And if so, what software would we need to pull this off? As I understand it, DFS-R does not provide file locking, so if we used that, we would need to go with a third-party product like Peerlock. But I don't even know if DFS-R is an option. Can it replicate quickly enough over a WAN link? Can any product? It seems that if we were to use synchronous replication, the programs would be unacceptably slow, as every write would have to wait for confirmation from the other end of the link. But if we used asynchronous replication, what kind of latency would we be looking at? There is a product from GlobalScape called WAFS that claims to provide "File coherence with real-time file locking, file release, and synchronization" and says that "As files are modified, changes are mirrored instantly using intelligent byte-level differencing to minimize the impact on network bandwidth". So this sounds like synchronous replication, but that doesn't even seem possible, given physical limitations such as the speed of light. If anyone has any experience with this kind of setup, or knows whether it's even possible, I'd appreciate your input and suggestions, including recommendations for software that we should check out.

    Read the article

  • Does 64-bit Windows 8 have the same 75% memory-usage limitation for applications as Windows 7?

    - by Barleyman
    64-bit Windows 7 (and Windows Vista) have a built-in limit of not being able to use the last 25% of RAM. You will get a low-memory warning when you get close to the limit. Even if you disable that warning, applications will run out of memory and crash, since the OS will refuse to allocate memory from that last 25%. That was fine when Vista was designed, when machines had 1 GB of total memory, but is pretty daft for today's 8 GB machines. Yes, the system will run cache, etc. on that extra 2 GB, but running out of memory when you have "merely" 2 GB left.... NB: this has nothing to do with the page file. If you limit the page file to a sensible size like 2 GB, you will still see this behavior: the system will cram the page file to the last byte while refusing to touch that last quarter of the RAM. Does Windows 8 change this behavior? Is there now some fixed minimum free RAM requirement, like 512 MB, or is it still 25%? Can you actually adjust the low-memory limit? EDIT: Here is an older post which discusses this same behavior on Windows 7; there is a fixed 25% limit in Windows 7 and I'd like to know if it's still in Windows 8: Windows 7 / Page File Disabled / 12 GB RAM / 2+ GB RAM free and "your computer is running low on memory". Edit 2: Here is another link discussing the low-memory warning and how to disable it. Note he claims the limit for RAM usage is 80%, not 75%, which would seem to be correct, as you can in fact allocate 6.4 GB of RAM on an 8 GB machine; anything above and beyond that goes to the page file, though. http://halflight.com.au/2011/04/06/how-to-disable-low-memory-warnings-and-the-advantages-of-removing-the-page-file/ Edit 3: Here are a couple of Process Explorer screenshots that demonstrate how it goes down. Exhibit 1: https://dl.dropbox.com/u/42068601/sysinfo.jpg Exhibit 2: https://dl.dropbox.com/u/42068601/sysint2.jpg You can see that Windows 7 will use the memory up to 6.4 GB as the very last resort. I have the low-memory warning switched off here, so programs crashed at the last screenshot's allocation. With the low-memory warning turned on, it starts nagging before you can push the OS to use that remaining 1.6 GB. The question is not "Is it OK that Windows does not want to allocate the last 20% of RAM because X", it's "Does Windows 8 still behave this way?". With 16 GB this really becomes dumb.

    Read the article

  • FTP script needs blank line

    - by Ones and Zeroes
    I am trying to determine the reason for some FTP servers requiring a blank line in the script, as follows (note the blank line required after the username credentials):

        open server.com
        username

        ftp_commands
        bye

    Example from: FTP from batch file. Another reference to the same: http://newsgroups.derkeiler.com/Archive/Comp/comp.sys.ibm.as400.misc/2008-05/msg00227.html. Also discussed here: archive.midrange.com/midrange-l/200601/msg00048.html ("The behavior I'm observing is the same as if I didn't specify the password to login."), with an answer referring to our same fix: archive.midrange.com/midrange-l/200601/msg00053.html and archive.midrange.com/midrange-l/200601/msg00065.html. Note: It is my experience that FTP questions attract uncouth responses. Admittedly FTP is outdated, but many clients still have legacy systems which they cannot upgrade or replace; the reason thereof should not be discussed here. The intention of this question is to invite a positive response. Please do not respond if you disagree with the above, or if you have never encountered this same issue. I suspect this may be limited to FTP scripts executed from Windows machines, but have been told that this happens often and with many different servers. My specific interest is to understand what may cause this, as I have a real-world example of a production system suddenly requiring this as a workaround fix after running for many years without issue. The server belongs to a third party who claims no change on their end. Server details are unknown and cannot be determined. Any help or encouragement from someone who has come across the same would be appreciated. ps. Sorry for the many words and references to painful responses, but I have asked similar questions on serverfault and elsewhere and unfortunately got back kneejerk responses debating the validity of the question. I would truly not ask or re-post this question online if I had a better understanding of the issue. I know of people who have seen this issue but don't know what causes it. I am wary that this question would again turn into another irrelevant discussion. Please, I ask very nicely: please do not respond if you have not encountered a similar issue. FURTHER EDIT: Please do not suggest changing the product. The problem is not the blank-line requirement; we know this fixes the issue. The problem is not being able to explain the reason for the blank line in the first place. Slight difference, but a critical point to note wrt the answering of this question.

    Read the article

  • Is it possible to download and run iPhone apps on an iPhone emulator?

    - by Adrian Grigore
    Hi, I am tasked with providing an iPhone client app for our SaaS website. I have never written an iPhone application, nor do I have an iPhone at the moment. Before I can decide whether or not I want to do this myself or outsource it, I'd like to try a few apps myself to get a feeling for the UI. Is there any iPhone emulator I might use to download and run apps from the App Store? I do have an Intel-based Mac, if that helps.

    Read the article
