Search Results

Search found 2200 results on 88 pages for 'human factors'.


  • Scalably processing large amounts of complicated database data in PHP, many times a day.

    - by Eph
    I'm soon to be working on a project that poses a problem for me. At regular intervals throughout the day, it will need to process tens of thousands of records, potentially over a million. Processing will involve several (potentially complicated) formulas, the generation of several random factors, writing some new data to a separate table, and updating the original records with the results. Ideally, this needs to happen for all records every three hours. Each new user of the site will add between 50 and 500 records that need to be processed in this fashion, so the total will not be steady.

    The code hasn't been written yet, as I'm still in the design process, mostly because of this issue. I know I'm going to need cron jobs, but I'm concerned that processing records at this scale may cause the site to freeze up, perform slowly, or just piss off my hosting company every three hours. I'd like to know if anyone has experience or tips on similar problems. I've never worked at this magnitude before, and for all I know this will be trivial for the server and not pose much of an issue. As long as ALL records are processed before the next three-hour period begins, I don't care whether they are processed simultaneously (though, ideally, all records belonging to a specific user should be processed in the same batch). So I've been wondering: should I process in batches every 5 minutes, 15 minutes, an hour, whatever works? And how do I best approach this (and make it scalable in a way that is fair to all users)?
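
    For illustration, a minimal sketch of the cron-driven batching (in Python; the table and column names, the formula, and the batch size are all made-up assumptions, since the schema isn't designed yet). Each run claims the records that were processed longest ago, so the work spreads evenly across the day instead of spiking every three hours:

        import random
        import sqlite3
        import time

        BATCH_SIZE = 5000  # tune so one run finishes well inside the cron interval

        def process_batch(conn):
            # Claim the least-recently-processed records (hypothetical schema).
            rows = conn.execute(
                "SELECT id, value FROM records ORDER BY processed_at ASC LIMIT ?",
                (BATCH_SIZE,),
            ).fetchall()
            now = time.time()
            for record_id, value in rows:
                result = value * random.uniform(0.9, 1.1)  # stand-in for the real formulas
                conn.execute("INSERT INTO results (record_id, result) VALUES (?, ?)",
                             (record_id, result))
                conn.execute("UPDATE records SET last_result = ?, processed_at = ? WHERE id = ?",
                             (result, now, record_id))
            conn.commit()

        if __name__ == "__main__":
            process_batch(sqlite3.connect("app.db"))  # invoked from cron every few minutes

    Adding the user ID as a secondary sort key in the claim query would also keep each user's records together in one batch.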

    Read the article

  • Is there any way to configure what reCAPTCHA is actually displaying?

    - by trejder
    Is there any way to control what kind of image is displayed to the user in reCAPTCHA, or what kind of puzzle he/she is required to solve? I have noticed at least two significant changes to what reCAPTCHA serves (and I must admit I don't much like these changes):

    For years reCAPTCHA served two words from scanned books, and the user was required to solve one of them. They were clearly readable (even the "second" ones, which could be omitted), and a human could solve them with nearly no problem.

    For the past few months, I have noticed a significant change on all of my sites that use reCAPTCHA. They started to show a combination of a computer-generated long number string and something that looks to me like a street/house number photographed by Google Street View. These are even easier to solve, but, most importantly, it has started to happen more and more often that the user is obliged to solve both of them.

    Now I have noticed another change/regression. Some of my sites remain at this so-called "level 2" (as above), and some of them have started to serve two words again ("level 1"?). Again, there are more and more situations where solving both words is required. But, most importantly, on this "level" the words are nearly impossible to solve (on my old mobile devices with a 3.5'' display I need 5-6 attempts to pass!). They're cluttered, written in some strange font, mostly in italics, with a lot of black and white stains or drops on the letters, etc. Plus, reCAPTCHA has stopped being consistent: some of my pages still serve "level 2" while some of them are "killing" end users with the need to solve "level 3".

    Is there any way I can control this, and force it to use only "level 2" on all my pages? (Of course, I'm using exactly the same piece of code to serve reCAPTCHA on all my pages.) Note that I'm not asking for something like in this question. I don't want to change what reCAPTCHA shows (to disable words in favor of only numbers, for example). I only want to control which "version" of puzzles (among those described above) reCAPTCHA shows, and I want to make it consistent across all my sites.

    Read the article

  • Recover lost code from compiled apk

    - by AlexRamallo
    I have an issue here, and it's making me really nervous. I was working on this game, and it was going great, so I took a copy of it on my laptop to do some work while away from my computer. Long story short, a hard-drive failure plus poor backups led to me losing a very important class. Is there a way to decompile the APK to retrieve the bit of code that was lost? It isn't overly complicated or sophisticated; it's just that it's impossible to rewrite it without reading every. single. line. of. code. in the entire application, since it initializes a LOT of classes and loads a bunch of stuff in a specific way. With a quick Google search I was able to find apktool, which decompiles the APK into a bunch of .smali files, which I don't think were designed for human reading. All I need to recover is one very big method in the class. I found the smali file that contains it, and I think I found the line where it starts, something like:

        .method public declared-synchronized load(Lcom/X/X/game/X;)I

    Any help would be appreciated, since I would have to scrap the entire game without this method.

    Read the article

  • My method is too specific. How can I make it more generic?

    - by EricBoersma
    I have a class, the outline of which is listed below.

        import org.apache.commons.math.stat.Frequency;

        public class WebUsageLog {
            private Collection<LogLine> logLines;
            private Collection<Date> dates;

            WebUsageLog() {
                this.logLines = new ArrayList<LogLine>();
                this.dates = new ArrayList<Date>();
            }

            SortedMap<Double, String> getFrequencyOfVisitedSites() {
                // reverse order to sort from the highest percentage to the lowest
                SortedMap<Double, String> frequencyMap = new TreeMap<Double, String>(Collections.reverseOrder());
                Collection<String> domains = new HashSet<String>();
                Frequency freq = new Frequency();
                for (LogLine line : this.logLines) {
                    freq.addValue(line.getVisitedDomain());
                    domains.add(line.getVisitedDomain());
                }
                for (String domain : domains) {
                    frequencyMap.put(freq.getPct(domain), domain);
                }
                return frequencyMap;
            }
        }

    The intention of this application is to allow our Human Resources folks to view the Web usage logs we send to them. However, I'm sure that over time I'd like to offer the option to view not only the frequency of visited sites, but also other members of LogLine (things like the frequency of assigned categories, accessed types [text/html, img/jpeg, etc.], filter verdicts, and so on). Ideally, I'd like to avoid writing an individual compilation method for each of those types, since each could end up looking nearly identical to the getFrequencyOfVisitedSites() method. So, my question is twofold: first, can you see anywhere this method should be improved, from a mechanical standpoint? And second, how would you make this method more generic, so that it might be able to handle an arbitrary set of data?
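
    The language-independent idea is to pass the field accessor in as a function instead of hard-coding it. A minimal sketch of that pattern (in Python for brevity; the class and field names are assumptions mirroring the post):

        from collections import Counter

        def frequency_by(log_lines, key):
            """Map each distinct key value to its share of all lines,
            sorted from highest percentage to lowest."""
            counts = Counter(key(line) for line in log_lines)
            total = sum(counts.values())
            return sorted(
                ((count / total, value) for value, count in counts.items()),
                reverse=True,
            )

        # One generic function now covers every per-field report:
        # frequency_by(log, key=lambda line: line.visited_domain)
        # frequency_by(log, key=lambda line: line.category)

    In Java the same shape is a method that takes a java.util.function.Function<LogLine, String> (or a small one-method interface of your own) in place of the hard-coded getVisitedDomain() call.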

    Read the article

  • How to check internet connection status on the iPhone

    - by Radix
    Hello; I want to check whether the internet is connected or not, using either SystemConfiguration or CFNetwork; I am not quite sure which one. Then, if the internet is connected, I want to know whether it is connected through Wi-Fi or not. I tried an example where I used the code below:

        -(IBAction) shownetworkStatus {
            NSURL *url = [NSURL URLWithString:@"www.google.com"];
            if (url != NULL) {
                lbl.text = @"Connected";
            } else {
                lbl.text = @"notConnected";
            }
        }

    Some say that this is not valid per Apple and that you have to use the SystemConfiguration framework; please let me know what needs to be done. I also personally think that what I am doing in the above code is not proper, since one day Google may be down due to maintenance or other factors. Also, if you could provide me a link showing how to display the name of the Wi-Fi network, that would be really cool. I searched the internet and found the Reachability.h code, which again is a bouncer for me, as I want to learn the concepts, not copy-paste them. Thanks and regards, Radix

    Read the article

  • C As Principal Class For Mac App

    - by CodaFi
    So, I've got a C file raring to go and be the main class behind an all-C Mac app; however, a combination of limiting factors is preventing the application from launching. As it currently stands, the project is just a main.m and a class called AppDelegate.c, so I entered "AppDelegate" as the name of the principal class in the Info.plist, and to my complete surprise, the log printed:

        Unable to find class: AppDelegate, exiting

    This would work perfectly well on iOS, because the main function accepts the name of a delegate class and handles it automatically, but NSApplicationMain() takes no such argument. Now, I know this stems from the fact that there are no @interface/@implementation directives in C, and that's really what the OS seems to be looking for, so I wrote a simple NSApplication subclass and provided it as the principal class in the plist, and it launched perfectly well. My question is: how could one go about setting a C file as the principal class in a Mac application and have it launch correctly? P.S. Don't ask what or why I'm doing this for; the foundation must be dug. For @millimoose's amusement, here is the AppDelegate.c file:

        #include <objc/runtime.h>
        #include <objc/message.h>

        struct AppDel {
            Class isa;
            id window;
        };

        // This is a strong reference to the class of the AppDelegate
        // (same as [AppDelegate class])
        Class AppDelClass;

        BOOL AppDel_didFinishLaunching(struct AppDel *self, SEL _cmd, void *application, void *options) {
            self->window = objc_msgSend(objc_getClass("NSWindow"), sel_getUid("alloc"));
            self->window = objc_msgSend(self->window, sel_getUid("init"));
            objc_msgSend(self->window, sel_getUid("makeKeyAndOrderFront:"), self);
            return YES;
        }

    Read the article

  • Storing large numbers of varying size objects on disk

    - by Foredecker
    I need to develop a system for storing large numbers (tens to hundreds of thousands) of objects. Each object is email-like: there is a main text body, and several ancillary text fields of limited size. A body will be from a few bytes to several KB in size. Each item will have a single unique ID (probably a GUID) that identifies it. The store will only be written to when an object is added to it; it will be read often, and deletions will be rare. The data is almost all human-readable text, so it will be readily compressible.

    A system that lets me issue the I/Os and manage the memory and caching myself would be ideal. I'm going to keep the indexes in memory, using them to map to the single (and primary) key for each object. Once I have the key, I'll load the object from disk, or from the cache. The data management system needs to be part of my application: I do not want to depend on OS services or separately installed packages. Native (C++) would be best, but a managed (C#) thing would be OK. I believe a database is the obvious choice, but this needs to be super-fast for looking up an object and loading it into memory. I am not experienced with database tech, and I'm concerned that general relational systems will not handle all this variable-sized data efficiently. (Note, this has nothing to do with my job; it's a personal project.) In your experience, what are the viable alternatives to a traditional relational DB? Or would a DB work well for this?
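
    As one concrete shape for this, a minimal sketch of an append-only object store with an in-memory index (in Python; the file layout and names are invented for illustration). Writes append to a single data file, and the index maps each GUID to an (offset, length) pair so a read is one seek:

        import json
        import os
        import uuid
        import zlib

        class ObjectStore:
            def __init__(self, path):
                self.path = path
                self.index = {}  # guid -> (offset, length)
                open(path, "ab").close()  # create the data file if missing

            def put(self, obj):
                guid = str(uuid.uuid4())
                blob = zlib.compress(json.dumps(obj).encode("utf-8"))  # text compresses well
                with open(self.path, "ab") as f:
                    f.seek(0, os.SEEK_END)
                    offset = f.tell()
                    f.write(blob)
                self.index[guid] = (offset, len(blob))
                return guid

            def get(self, guid):
                offset, length = self.index[guid]
                with open(self.path, "rb") as f:
                    f.seek(offset)
                    return json.loads(zlib.decompress(f.read(length)))

    A real version would also persist the index and handle crash recovery, which is essentially what embedded engines like SQLite or Berkeley DB already provide in-process, with no separately installed service.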

    Read the article

  • Logging in to Wordpress through CodeIgniter DX Authentication

    - by whobutsb
    Hello all, I'm about to start a very large project: rebuilding my company's intranet. The plan is to have most of the intranet live in a CI application. I chose CI because I'm very familiar with all the CI methods. Some sections of the intranet are going to be WordPress blogs; for example, the Human Resources dept. and the Marketing dept. will have their own WordPress blogs. Ideally, my plan is to log on to the intranet with a CI authentication library like DX Auth, by querying the company's Active Directory. When I get the AD information back for a user, I will save their group memberships into a session. It would be fantastic if WordPress could use that session information to log the user in as an editor if they are a member of the Marketing group, and to allow users who are not members of the group to comment on that blog without logging into WordPress. My question is whether there are any CI classes, WordPress plugins, or tutorials out there for this sort of integration between the two systems. Thank you for your help!

    Read the article

  • Adding variably named fields to Python classes

    - by Carson Myers
    I have a Python class, and I need to add an arbitrary number of arbitrarily long lists to it. The names of the lists I need to add are also arbitrary. For example, in PHP I would do this:

        class MyClass { }
        $c = new MyClass();
        $n = "hello";
        $c->$n = array(1, 2, 3);

    How do I do this in Python? I'm also wondering whether this is a reasonable thing to do. The alternative would be to create a dict of lists in the class, but since the number and size of the lists are arbitrary, I was worried there might be a performance hit from this. If you are wondering what I'm trying to accomplish: I'm writing a super-lightweight script interpreter. The interpreter walks through a human-written list and creates some kind of bytecode. The bytecode of each function will be stored as a list named after the function in an "app" class. I'm curious to hear any other suggestions on how to do this as well.
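
    For reference, dynamic attribute names in Python are handled with setattr/getattr; a minimal sketch (the names here are just the PHP example transposed):

        class MyClass:
            pass

        c = MyClass()
        n = "hello"
        setattr(c, n, [1, 2, 3])   # like $c->$n = array(1, 2, 3) in PHP

        print(getattr(c, n))       # [1, 2, 3]
        print(c.hello)             # same attribute, accessed statically

    Under the hood, a plain class stores these attributes in the instance's __dict__, so the performance is essentially the same as maintaining your own dict of lists.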

    Read the article

  • What are proven, scalable data persistence solutions for consumer profiles?

    - by Hubbard
    Consumer profiles with analytical scores [ConsumerID, 1..n demographic variables, 1..n analytical scores, e.g. "likely to churn", "likely to buy an item worth $100", etc.] have to be fast to query if they are to be used in customizing websites, consumer communications, etc.

    Well, if you have a large number of consumers and large profiles with a huge set of variables (as profiles describing human behaviour are likely to be), you are in trouble. If you really have a physical relational database at which you aim a query, and then a physical disk starts to rotate someplace to give you an individual profile or a set of profiles, the profile user (a website customizing a page, a recommendation engine making a recommendation) has died of boredom before getting any observable results. There is the possibility of holding the profiles in memory, which would of course increase performance hugely.

    What are the most proven solutions for fast-response, scalable consumer profile storage? Is there a shootout of these someplace?
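
    As one concrete example of the in-memory route, a sketch of a profile store using Redis hashes via the redis-py client (Python; the key scheme and field names are invented, and it assumes a Redis instance on localhost and redis-py 3.5+ for the mapping argument):

        import redis

        r = redis.Redis(host="localhost", port=6379)

        # One hash per consumer: demographic variables and analytical
        # scores live as fields of the same key.
        r.hset("profile:10045", mapping={
            "age_band": "35-44",
            "churn_score": "0.83",       # "likely to churn"
            "buy_100_score": "0.41",     # "likely to buy an item worth $100"
        })

        profile = r.hgetall("profile:10045")   # one round-trip per profile lookup

    Whether Redis, memcached, or an in-process cache in front of a relational store fits best comes down to profile count times profile size versus available RAM.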

    Read the article

  • model.matrix() with na.action=NULL?

    - by Vincent
    I have a formula and a data frame, and I want to extract the model.matrix(). However, I need the resulting matrix to include the NAs that were found in the original dataset. If I were to use model.frame() to do this, I would simply pass it na.action=NULL. However, the output I need is of the model.matrix() format. Specifically, I need only the right-hand-side variables, I need the output to be a matrix (not a data frame), and I need factors to be converted to a series of dummy variables. I'm sure I could hack something together using loops or something, but I was wondering if anyone could suggest a cleaner and more efficient workaround. Thanks a lot for your time! And here's an example:

        dat <- data.frame(matrix(rnorm(20), 5, 4), gl(5, 2))
        dat[3, 5] <- NA
        names(dat) <- c(letters[1:4], 'fact')
        ff <- a ~ b + fact

        # This omits the row with a missing observation on the factor
        model.matrix(ff, dat)

        # This keeps the NA, but it gives me a data frame and does not dichotomize the factor
        model.frame(ff, dat, na.action=NULL)

    Here is what I would like to obtain:

           (Intercept)          b fact2 fact3 fact4 fact5
        1            1  0.7266086     0     0     0     0
        2            1 -0.6088697     0     0     0     0
        3           NA  0.4643360    NA    NA    NA    NA
        4            1 -1.1666248     1     0     0     0
        5            1 -0.7577394     0     1     0     0
        6            1  0.7266086     0     1     0     0
        7            1 -0.6088697     0     0     1     0
        8            1  0.4643360     0     0     1     0
        9            1 -1.1666248     0     0     0     1
        10           1 -0.7577394     0     0     0     1

    Read the article

  • Adding property:value pairs from the Google Maps API to an existing JSON/JavaScript object

    - by Rockinelle
    I am trying to create a script that finds the closest store/location to a customer using the Google Maps API. I've got a JSON object which is a collection of store objects. Using jQuery's .each to iterate through that object, I grab the driving time from the customer to each store. If it finds the directions, it copies the duration of the drive, which is an object with the time in seconds and a human-readable value. That appears to work. However, when I try to sort through all of the store objects with the drive time added, I cannot read the object copied from Google. If I console.log the whole object, 'closest_store', it shows the values I'm looking for. When I try to read the values directly via closest_store.driveTime, Firebug outputs undefined in the console. What am I missing here?

        $.getJSON('<?php echo Url::base() ?>oldservices/get_locations', {}, function(data){
            $.each(data, function(index, value) {
                var end = data[index].address;
                var request = {
                    origin: start,
                    destination: end,
                    travelMode: google.maps.DirectionsTravelMode.DRIVING
                };
                directionsService.route(request, function(response, status) {
                    //console.log(response);
                    if (status == google.maps.DirectionsStatus.OK) {
                        value.driveTime = response.trips[0].routes[0].duration.value;
                        //console.log(response.trips[0].routes[0].duration.value);
                    };
                });
            });

            var closest_store;
            $.each(data, function(index, current_store) {
                if (index == 0) {
                    closest_store = current_store;
                } else if (current_store.driveTime.value < closest_store.driveTime.value) {
                    closest_store = value
                };
                console.log(current_store.driveTime);
            });
        });

    Read the article

  • How do I split ONE array into two separate arrays based on magnitude and a threshold?

    - by youhaveaBigego
    I have an array which has BIG numbers and small numbers in it. I got it from a log run through Wireshark; it is the total number of bytes of TCP traffic. But Wireshark does not discriminate: it reports the traffic stats of ALL types of traffic, so the big transfers and the small background chatter end up in the same list. This is what the array looks like:

        @Array = qw(10912980 10924534 10913356 10910304 10920426 10900658 10911266
                    10912088 10928972 10914718 10920770 10897774 10934258 10882186
                    10874126 8531 8217 3876 8147 8019 68157 3432 3350 3338 3280
                    3280 7845 7869 3072 3002 2828 8397 1328 1280 1240 1194 1193
                    1192 1194 6440 1148 1218 4236 1161 1100 1102 1148 1172 6305
                    1010 5437 3534 4623 4669 3617 4234 959 1121 1121 1075 3122
                    3076 1020 3030 628 2938 2938 1611 1611 1541 1541 1541 1541
                    1541 1541 1541 1541 1541 1541 1541 1541 583 370 178)

    When you look at this array carefully, one thing is obvious to the human eye: there are really BIG numbers and small numbers. (Basically what I am saying is, there is the 1% class and the low-income class, and no middle class.) I want to split the array into two different arrays. That would require me to set a threshold. Array 1 should contain ONLY the BIG numbers (10924534 down to 10874126), and array 2 the smaller numbers (68157 down to 178). By the way, the array is not sorted. The user will NOT input the threshold, so it should be determined smartly; see the sketch below.
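
    One common "smart" threshold is the largest gap in the sorted data. A minimal sketch (in Python here, though the same logic is a few lines of Perl), assuming the two classes are separated by the biggest jump between consecutive sorted values:

        def split_by_largest_gap(values):
            """Split values into (small, big) at the widest gap in sorted order."""
            s = sorted(values)
            # Index whose jump from its left neighbour is the largest.
            widest = max(range(1, len(s)), key=lambda i: s[i] - s[i - 1])
            threshold = s[widest]
            small = [v for v in values if v < threshold]
            big = [v for v in values if v >= threshold]
            return small, big

        data = [10912980, 10924534, 10913356, 8531, 8217, 68157, 583, 370, 178]
        small, big = split_by_largest_gap(data)
        # small -> [8531, 8217, 68157, 583, 370, 178]; big -> the ~10.9M values

    For data like the above, the jump from 68157 to the ~10.9 million values dwarfs every other gap, so the split lands exactly where the eye puts it.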

    Read the article

  • Format a timestamp into text

    - by user1257114
    I want to get the modify date of a file and then format it into a human-readable date. I am running a C program that gets information on when a particular file was last modified. My C code contains a system command built from a number of egreps, awks, and seds separated by pipes. Using sed or awk or something similar, how can I convert 06 to June? (This can be any month, so an array or something is required.) What I am trying to achieve is to end up with a string similar to:

        DD Month, YYYY at HH:MM:SS   (e.g. 28 June, 2012 at 14:03:59)

    My C code contains:

        char string1[100] = "";
        #define MAXCHAR 100
        FILE *fp;
        char str[MAXCHAR], str2[MAXCHAR];
        char* filename = "newfile";

        /* stat:  run 'stat' on the dtlName file to display status information.
           egrep: search for the pattern 'Modify' and print the lines containing it.
           awk:   get columns 2 & 3
           sed:   replace the . with a space, leaving 3 columns of output
           awk:   only print cols 1 & 2 to newfile
           sed:   replace '-' with ' ' in newfile
           awk:   format output in newfile */
        sprintf(string1, "/bin/stat %s \
            | egrep Modify \
            | /bin/awk '{print $2, $3}' \
            | /bin/sed 's/\\./ /g' \
            | /bin/awk '{print $1, $2}' \
            | /bin/sed 's/-/ /g' \
            | /bin/awk '{print $3,$2\", \"$1,\"at\",$4}' > newfile", dtlName);
        system(string1);

        fp = fopen(filename, "r");
        while (fgets(str, MAXCHAR, fp) != NULL)
            sprintf(str2, "%s", str);

        /* Write information to file */
        DisplayReportFile(report);
        ReportEntry(report, L"Source file: %s, Created: %s\n\n", dtlName, str2);
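
    For what it's worth, the whole pipeline can be replaced by a single strftime-style format, whose %B directive expands to the full month name (June) without a hand-rolled array. A sketch of the same idea in Python (the file name is made up):

        import os
        from datetime import datetime

        mtime = os.stat("newfile").st_mtime             # modify time, seconds since epoch
        stamp = datetime.fromtimestamp(mtime)
        print(stamp.strftime("%d %B, %Y at %H:%M:%S"))  # e.g. "28 June, 2012 at 14:03:59"

    In C the equivalent is stat() to fill a struct stat, then strftime() with the same %B format applied to localtime(&st.st_mtime), all without leaving the program for a shell.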

    Read the article

  • What guarantees are there on the run-time complexity (Big-O) of LINQ methods?

    - by tzaman
    I've recently started using LINQ quite a bit, and I haven't really seen any mention of run-time complexity for any of the LINQ methods. Obviously, there are many factors at play here, so let's restrict the discussion to the plain IEnumerable LINQ-to-Objects provider. Further, let's assume that any Func passed in as a selector / mutator / etc. is a cheap O(1) operation.

    It seems obvious that all the single-pass operations (Select, Where, Count, Take/Skip, Any/All, etc.) will be O(n), since they only need to walk the sequence once; although even this is subject to laziness.

    Things are murkier for the more complex operations; the set-like operators (Union, Distinct, Except, etc.) work using GetHashCode by default (afaik), so it seems reasonable to assume they're using a hash table internally, making these operations O(n) as well, in general. What about the versions that use an IEqualityComparer?

    OrderBy would need a sort, so most likely we're looking at O(n log n). What if it's already sorted? How about if I say OrderBy().ThenBy() and provide the same key to both? I could see GroupBy (and Join) using either sorting or hashing. Which is it? Contains would be O(n) on a List, but O(1) on a HashSet; does LINQ check the underlying container to see if it can speed things up?

    And the real question: so far, I've been taking it on faith that the operations are performant. However, can I bank on that? STL containers, for example, clearly specify the complexity of every operation. Are there any similar guarantees on LINQ performance in the .NET library specification?
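
    The List-versus-HashSet point generalizes: the complexity lives in the container, and it is easy to observe directly by timing membership tests. A small illustration of the principle (in Python here, but the shape of the result is the same for .NET's List<T> and HashSet<T>):

        import timeit

        n = 1_000_000
        as_list = list(range(n))
        as_set = set(range(n))

        # Worst case for the list: the sought element is last.
        print(timeit.timeit(lambda: (n - 1) in as_list, number=100))  # linear scan, O(n)
        print(timeit.timeit(lambda: (n - 1) in as_set, number=100))   # hash lookup, O(1)

    As far as I know, the .NET library specification gives no blanket complexity guarantees comparable to the C++ standard's container requirements; the per-method documentation remarks (e.g. that OrderBy performs a stable sort) are as close as it gets.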

    Read the article

  • Project Euler #3

    - by Alex
    Question: The prime factors of 13195 are 5, 7, 13 and 29. What is the largest prime factor of the number 600851475143? I found this one pretty easy, but running the file takes an extremely long time; it's been going for a while and the highest number I've got to is 716151937. Here is my code. Am I just going to have to wait, or is there an error in my code?

        // User-made class
        public class Three {
            public static boolean checkPrime(long p) {
                long i;
                boolean prime = false;
                for (i = 2; i < p / 2; i++) {
                    if (p % i == 0) {
                        prime = true;
                        break;
                    }
                }
                return prime;
            }
        }

        // Note: This is a separate file
        public class ThreeMain {
            public static void main(String[] args) {
                long comp = 600851475143L;
                boolean prime;
                long i;
                for (i = 2; i < comp / 2; i++) {
                    if (comp % i == 0) {
                        prime = Three.checkPrime(i);
                        if (prime == true) {
                            System.out.println(i);
                        }
                    }
                }
            }
        }
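
    The loop bound is the killer: 600851475143 / 2 is roughly 3 × 10^11 iterations. Dividing each factor out as it is found shrinks the number instead, and the loop then only has to run up to the square root. A sketch of that approach (in Python, but the same few lines translate directly to Java):

        def largest_prime_factor(n):
            factor = 2
            while factor * factor <= n:
                if n % factor == 0:
                    n //= factor        # strip this factor out; n keeps shrinking
                else:
                    factor += 1
            return n                    # what remains is the largest prime factor

        print(largest_prime_factor(600851475143))  # 6857

    Because every smaller factor has already been divided out by the time the loop ends, no separate primality test is needed at all.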

    Read the article

  • Unclear pricing of Windows Azure

    - by Dirk
    How do people feel about the Windows Azure pricing model and the way it is presented to the user? I just found out that Azure keeps charging hours for STOPPED instances. I just received a bill of more than 100 euro for 3 STOPPED instances (not) running "HelloAzure". In the past I also played around with Amazon Web Services, and Amazon doesn't charge for stopped instances. I was wondering: should I have known this beforehand, or is Microsoft doing a bad job of communicating its pricing model clearly?

    Quote from http://www.microsoft.com/windowsazure/pricing/ :

        Compute time, measured in service hours: Windows Azure compute hours are charged
        only for when your application is deployed. When developing and testing your
        application, developers will want to remove the compute instances that are not
        being used to minimize compute hour billing. Partial compute hours are billed
        as full hours.

    I read this, so I stopped all instances after a few hours of playing around. Now it seems I should have deleted them, not just "stopped" them. Strictly speaking, it all depends on the definition of the word "deployed". If you upload an application but it is not running, can it still be regarded as "deployed"? Maybe, but when you read this for the first time, with AWS experience in mind, I don't think it's 100% clear what it means. Technically speaking, an uploaded application only uses (read: should only use / needs only) a few MB of hard-drive space. It doesn't require any CPU time. If Azure wants to reserve CPUs for non-running instances, well, that's Azure's choice, not mine.

    I don't want to start a hate campaign at all, but I do want to know how people think about this subject. Should Microsoft be clearer about its pricing model, or do you think it's clear enough? Second question: has anyone been refunded for a similar case? Thanks in advance!

    UPDATE 27-01-2011: I sent an email to customer support a few days ago, but I guess it never reached a human being, because I haven't heard anything back. So I made a telephone call today with a Dutch customer support representative (I live in Holland). She totally understood the problem and is trying to get a refund for me. However, she mentioned that "usually these refund requests are denied", but she's going to try. She also mentioned that I'm not the first one with this (or a similar) problem.

    UPDATE 28-01-2011: I just received a phone call from Microsoft support. The lady told me some good news: the money will be refunded. The invoice has not been generated yet, so my credit card will first be charged and then refunded, but hey, that's no problem for me! I'm glad with the way it's been solved. Thanks everybody!

    Read the article

  • Setting up a fileserver, some questions?

    - by Tanax
    Recently I've become very interested in setting up a file server, mostly for home usage, but also because I live in 2 places and need to be able to access my files from both homes. I have already done some research into this, but I am unclear about some things. My requirements are the following:

    - Needs to work on both Mac and PC (only using Windows at the moment on the PC, but it would be good if it supports more OSes, to make it future-proof in case I need Linux or something else)
    - Need to be able to set up a folder/drive/network space to act as a link to a certain folder on the file server
    - All files should be stored only on the file server, i.e. no "shared" folders like in Dropbox, where files are stored on the client computer
    - Would prefer it if folders were password protected, or if I could somehow specify which users can access the file server's shares
    - The file server's OS will most likely have to be Windows, due to factors outside of it being just a file server

    I've already figured out that I will need to set up a VPN so that I can access my file server from outside the local network; I'm probably going to use OpenVPN.

    Question 1: How would I go about setting up a VPN server so that I can connect to my local network at the file server's location? I know that since I'm on a dynamic IP I will have to get some sort of dynamic DNS service; I've already checked into this and I'm fairly sure I know how to fix that. I also know that I will have to forward the port OpenVPN uses in my router.

    Question 2: How would I actually share the folders on the file server so that I can access them on my other computers? I've researched Samba, but I'm uncertain whether it needs to run on a Linux OS. I know that the clients connecting to it can be Windows, for example, but can the Samba "server" be run on Windows? Also, it appears that Samba shares a folder, meaning it works like Dropbox, and I don't want that. So how would I share a folder in that case to make it work the way I want?

    Sorry for the incredibly long question; I tried to structure it the best I could for an easier read. Thanks in advance!

    Read the article

  • Can't detect wireless networks, running Ubuntu 9.10 on an Eee PC 1000HE

    - by David Brown
    Hi, I am running Ubuntu 9.10 on an ASUS Eee PC 1000HE, and my wireless card, a Ralink RT2860, is failing to recognize any wireless networks. It was working fine this morning, until I accidentally hit Fn+F2, disabling the wireless card. I hit Fn+F2 again to re-enable it, but it no longer detects any wireless networks (and other computers in the household do recognize them). Any help at all will be greatly appreciated! I googled quite a bit and talked to a human about this, but could not fix it. We tried dhclient ra0, which returned:

        There is already a pid file /var/run/dhclient.pid with pid 2438
        killed old client process, removed PID file
        Internet Systems Consortium DHCP Client V3.1.2
        Copyright 2004-2008 Internet Systems Consortium.
        All rights reserved.
        For info, please visit http://www.isc.org/sw/dhcp/

        Listening on LPF/ra0/00:25:d3:13:f4:20
        Sending on   LPF/ra0/00:25:d3:13:f4:20
        Sending on   Socket/fallback
        DHCPDISCOVER on ra0 to 255.255.255.255 port 67 interval 6
        DHCPDISCOVER on ra0 to 255.255.255.255 port 67 interval 9
        DHCPDISCOVER on ra0 to 255.255.255.255 port 67 interval 12
        DHCPDISCOVER on ra0 to 255.255.255.255 port 67 interval 8
        DHCPDISCOVER on ra0 to 255.255.255.255 port 67 interval 10
        DHCPDISCOVER on ra0 to 255.255.255.255 port 67 interval 10
        DHCPDISCOVER on ra0 to 255.255.255.255 port 67 interval 6
        No DHCPOFFERS received.
        No working leases in persistent database - sleeping.

    Other data: iwconfig returns (other stuff here...)

        ra0   RT2860 Wireless  ESSID:""  Nickname:"RT2860STA"
              Mode:Auto  Frequency=2.412 GHz  Bit Rate=1 Mb/s
              RTS thr:off  Fragment thr:off
              Link Quality=10/100  Signal level:0 dBm  Noise level:-87 dBm
              Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
              Tx excessive retries:0  Invalid misc:0  Missed beacon:0

    ifconfig returns (other stuff here...)

        ra0   Link encap:Ethernet  HWaddr 00:25:d3:13:f4:20
              inet6 addr: fe80::225:d3ff:fe13:f420/64 Scope:Link
              UP BROADCAST MULTICAST  MTU:1500  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:583 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
              Interrupt:19

        ra0:avahi Link encap:Ethernet  HWaddr 00:25:d3:13:f4:20
              inet addr:169.254.7.117  Bcast:169.254.255.255  Mask:255.255.0.0
              UP BROADCAST MULTICAST  MTU:1500  Metric:1
              Interrupt:19

    Read the article

  • Autoloading Development or Production configs (best practices)

    - by Xeoncross
    When programming sites you usually have one set of config files for the development environment and another set for the production server (or one file with both settings). I am assuming all projects should be handled by version control like git or svn; manual file transfers (like FTP) are wrong on so many levels. How you enable/disable the correct settings (so that your system knows which ones to use) is a problem for me. Each system I work on just kind of jimmy-rigs a solution. Below are the 3 methods I know of, and I am hoping that someone can suggest something more elegant; there is also a sketch of one more option after the list.

    1) File based: the system loads a folder structure based on the URL requested.

        /site.com
        /site.fakeTLD
        /lib
        index.php

    For example, if the URL is http://site.com then the system loads the production config files located in the site.com folder. However, if I'm working on the site locally, I visit http://site.fakeTLD to work on the local copy of the site. To set this up I edit my hosts file to point site.fakeTLD to my own computer (127.0.0.1/localhost) and then create a vhost in Apache. Now I can work on the codebase locally and then push to the server without any trouble. The problem is that this is susceptible to a "host" injection attack: someone loading site.com could set the host to site.fakeTLD, and the system would load my development config files instead of production.

    2) Config based: the config files contain one section for development and one for production. The problem is that each time you push your changes to the repo you have to edit the file to specify which set of config options should be used:

        $use = 'production'; //'development';

    This leaves the repo open to human error, should one of the developers forget to enable the right setting.

    3) Filesystem check based: all the development machines have an extra empty file called "development.txt" or something. Each time the system loads, it checks for this file; if found, it knows it is in development mode, and if missing, it knows it is in production mode. Since the file is NEVER ADDED to the repo, it will never be pushed (and checked out) on the production machine. However, this just doesn't feel right, and it causes a slight slowdown, since all filesystem checks are slow.
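
    A fourth option, common enough to sketch here, keys the choice off an environment variable set per machine (in the vhost config, shell profile, or CI settings), so nothing environment-specific lives in the repo at all. A minimal sketch of the idea (in Python; the variable and file names are assumptions):

        import json
        import os

        # Each machine declares what it is, outside the repo:
        #   production server:  export APP_ENV=production
        #   dev machines:       export APP_ENV=development (or rely on the default)
        env = os.environ.get("APP_ENV", "development")

        # Loads config.production.json or config.development.json accordingly.
        with open(f"config.{env}.json") as f:
            config = json.load(f)

    In PHP the same trick is getenv('APP_ENV') combined with SetEnv in the Apache vhost, which avoids both the host-header attack of method 1 and the human error of method 2.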

    Read the article

  • Why is 50.22.53.71 hitting my localhost node.js in an attempt to find a PHP setup?

    - by laggingreflex
    I just created a new app using the angular-fullstack Yeoman generator, edited it a bit to my liking, and ran it with grunt on my localhost, and immediately upon starting up I got this flood of requests to paths that I haven't even defined. Is this a hacking attempt? And if so, how does the hacker (human or bot) immediately know where my server is and when it came online? Note that I haven't put anything online; it's just a localhost setup and I'm merely connected to the internet. (Although my router does allow port 80 incoming.) Whois shows that the IP address belongs to SoftLayer Technologies. Never heard of them.

        Express server listening on 80, in development mode
        GET / [200] | 127.0.0.1 (Chrome 31.0.1650)
        GET /w00tw00t.at.blackhats.romanian.anti-sec:) [404] | 50.22.53.71 (Other)
        GET /scripts/setup.php [404] | 50.22.53.71 (Other)
        GET /admin/scripts/setup.php [404] | 50.22.53.71 (Other)
        GET /admin/pma/scripts/setup.php [404] | 50.22.53.71 (Other)
        GET /admin/phpmyadmin/scripts/setup.php [404] | 50.22.53.71 (Other)
        GET /db/scripts/setup.php [404] | 50.22.53.71 (Other)
        GET /dbadmin/scripts/setup.php [404] | 50.22.53.71 (Other)
        GET /myadmin/scripts/setup.php [404] | 50.22.53.71 (Other)
        GET /mysql/scripts/setup.php [404] | 50.22.53.71 (Other)
        GET /mysqladmin/scripts/setup.php [404] | 50.22.53.71 (Other)
        GET /typo3/phpmyadmin/scripts/setup.php [404] | 50.22.53.71 (Other)
        GET /phpadmin/scripts/setup.php [404] | 50.22.53.71 (Other)
        GET /phpMyAdmin/scripts/setup.php [404] | 50.22.53.71 (Other)
        GET /phpmyadmin/scripts/setup.php [404] | 50.22.53.71 (Other)
        GET /phpmyadmin1/scripts/setup.php [404] | 50.22.53.71 (Other)
        GET /phpmyadmin2/scripts/setup.php [404] | 50.22.53.71 (Other)
        GET /pma/scripts/setup.php [404] | 50.22.53.71 (Other)
        GET /web/phpMyAdmin/scripts/setup.php [404] | 50.22.53.71 (Other)
        GET /xampp/phpmyadmin/scripts/setup.php [404] | 50.22.53.71 (Other)
        GET /web/scripts/setup.php [404] | 50.22.53.71 (Other)
        GET /php-my-admin/scripts/setup.php [404] | 50.22.53.71 (Other)
        GET /websql/scripts/setup.php [404] | 50.22.53.71 (Other)
        GET /phpmyadmin/scripts/setup.php [404] | 50.22.53.71 (Other)
        GET /phpMyAdmin/scripts/setup.php [404] | 50.22.53.71 (Other)
        GET /phpMyAdmin-2/scripts/setup.php [404] | 50.22.53.71 (Other)
        GET /php-my-admin/scripts/setup.php [404] | 50.22.53.71 (Other)
        GET /phpMyAdmin-2.5.5/index.php [404] | 50.22.53.71 (Other)
        GET /phpMyAdmin-2.5.5-pl1/index.php [404] | 50.22.53.71 (Other)
        GET /phpMyAdmin/ [404] | 50.22.53.71 (Other)
        GET /phpmyadmin/ [404] | 50.22.53.71 (Other)
        GET /mysqladmin/ [404] | 50.22.53.71 (Other)

    Read the article

  • Logitech Optical Mouse Frozen In Middle of Windows XP Pro Screen

    - by Code Sherpa
    I have a Logitech optical mouse/keyboard combo. I have been using them just fine with the system drivers for almost a year now. I recently updated my Kaspersky software and rebooted. Now the mouse is frozen in the middle of my screen. I am not able to log in to the Windows XP Pro box that has the frozen mouse (because I can't work the mouse), but I am able to remote desktop to it.

    Things I know / have tried: When I boot the problem computer, I am able to use the keyboard, but not the mouse. I have installed the latest version of Logitech's SetPoint (with the updated drivers) on the problem computer (via remote desktop) and that didn't seem to matter. I bought new batteries for the mouse and that didn't matter. I have tried the mouse/keyboard on another computer and the mouse works just fine there. My suspicion is that the Kaspersky install has overwritten a driver of some sort.

    Things I have not done (and would appreciate detailed steps for, if you feel this is the way to go):

    1) Uninstall all the mouse drivers on the machine, reboot, then reinstall. Note: when I get to Device Manager I don't see an option for Human Interface Devices (where the mouse device is). Here are my options: Computer, Disk drives, DVD/CD-ROM drives, Floppy controllers, IDE ATA/ATAPI, Imaging devices, Network adapters, Other devices, Ports, Processors, Sound, video, and game controllers, System devices, USB controllers. Also, I should point out that Video Controller is the only thing under Other devices, and it has a yellow exclamation mark. The same is true for all the items under Universal Serial Bus controllers. I think this means I have to update my BIOS, but since my mouse was working just fine without doing that, I don't think that is my problem. So, how do I get to my mouse device?

    2) Update my BIOS. Note: as pointed out above, I don't think this matters, as my mouse was working just fine under my computer's current BIOS version.

    Thanks for your help.

    Read the article

  • Very strange HID USB problem...

    - by Lasanha
    I really hope someone can help me here, because I have searched all over the web and nothing comes up. I always used a PS/2 keyboard and mouse plus a USB keypad (Genius ErgoMedia 500 Gaming Explorer) to play games: MMORPGs, FPS, you name it. It's very good, with 11 keys, possible macros, etc.

    Now comes the problem. I have a USB mouse with 4 extra buttons, and I need more buttons; I love buttons. Well, I plugged in the USB mouse and disconnected the PS/2 one. Everything is OK until I touch the mouse. If I do so, the ErgoMedia goes off, then on; then I move the mouse or press a button, and it happens all over again. Yesterday I went and bought a new USB mouse that I liked (NPlay, with macros and all that stuff, 3600 dpi), hoping the problem was only with the other mouse, but no: it does the exact same thing. The ErgoMedia keeps disconnecting and connecting every time I touch the mouse.

    What I have already done:

    - Updated the drivers of both mice
    - Updated the drivers of the ErgoMedia (no specific drivers; Windows based)
    - Checked the MB chipset drivers (actually no update, because they were already up to date)
    - Tried other USB ports (4 ports in the back, 4 ports in the front, and even the 1 port on the 16-in-1 card slot device)
    - Disabled the "allow Windows to turn off this device to save power" thing in Device Settings
    - Looked in Device Manager; the only problem that appears is on the ErgoMedia (Human Interface Device), and only when I move the mouse
    - Used Everest to watch the behaviour; everything is normal except the disconnecting thing, but no errors
    - Ruled out the power supply: only the ErgoMedia and the mouse are on the USBs, and I already disabled the 16-in-1 card reader with one USB slot to check
    - Cleaned the IRQ registry
    - Searched the entire internet for a fix
    - Helped with other people's problems while looking for a fix for mine (I'm not a pro, but not completely stupid either)
    - And now I'm talking to you, begging you to help me as a last resort...

    Machine: Acer M3641, Core2Quad, 64-bit OS (Vista 64-bit), 4 GB RAM, HD audio and graphics.

    I really hope someone out there knows a fix for this; maybe it's a simple thing, so simple that I'm too stupid to see it. Sorry for my bad English, but I make lots of errors even in my own language. Any help will be very welcome. Thanks for your concern and attention ^^

    Read the article

  • svchost.exe crash on wake up

    - by Serge
    Lately, whenever I wake my laptop from sleep I get a series of errors (generated by a host process failing). I haven't been able to figure out why this happens, but I know which host process fails, and I was wondering if someone has insight into why this occurs 99% of the time when my laptop wakes up. Here's the host process error:

        Faulting application svchost.exe_SysMain, version 6.0.6001.18000, time stamp 0x47919291,
        faulting module ntdll.dll, version 6.0.6002.18005, time stamp 0x49e0421d,
        exception code 0xc0000006, fault offset 0x000000000005a02d, process id 0x1738,
        application start time 0x01cae656279b1010.

    And here are some of the services that fail because of that host:

        The Windows Audio Endpoint Builder service terminated unexpectedly. It has done this 1 time(s).
        The following corrective action will be taken in 60000 milliseconds: Restart the service.

        The Wired AutoConfig service terminated unexpectedly. It has done this 1 time(s).
        The following corrective action will be taken in 0 milliseconds: Restart the service.

        The ReadyBoost service terminated unexpectedly. It has done this 2 time(s).
        The following corrective action will be taken in 60000 milliseconds: Restart the service.

        The Human Interface Device Access service terminated unexpectedly. It has done this 1 time(s).
        The following corrective action will be taken in 120000 milliseconds: Restart the service.

        The Network Connections service terminated unexpectedly. It has done this 2 time(s).
        The following corrective action will be taken in 100 milliseconds: Restart the service.

        The Program Compatibility Assistant Service service terminated unexpectedly. It has done this 2 time(s).
        The following corrective action will be taken in 60000 milliseconds: Restart the service.

        The Superfetch service terminated unexpectedly. It has done this 2 time(s).
        The following corrective action will be taken in 60000 milliseconds: Restart the service.

    Anyway, I think you get the point; there are a few more. It got really annoying to wait for those services to restart, so I created a batch file that restarts them automatically whenever the WLAN stops. I'm using Vista x64 on a Studio XPS 1640.

    Read the article
