Search Results

Search found 6850 results on 274 pages for 'boost random'.

Page 160/274 | < Previous Page | 156 157 158 159 160 161 162 163 164 165 166 167  | Next Page >

  • ASP.NET High CPU Bringing Servers to their Knees

    - by user880954
    Ok, our new build is having 100% CPU spikes on each server at random intervals. For long durations it makes the site totally unresponsive - this happens at peak times as people in different countries log on to the site, etc. We've looked at perfmon, memory profilers, the CLR profiler, SQL profilers and Red Gate ANTS profiler, and tried load testing in UAT - but we cannot even reproduce the problem. This could mean that only thousands of users hitting the live site causes it to happen. One pattern we did notice was that the new code - the broken build - actually uses noticeably fewer threads. We are also using Spring for IoC - does this have a bad reputation? To make things worse, we cannot deploy to live due to the business impact - so we cannot narrow the problem down to a subset of the new features we've added. We truly are destroyed - has anyone got any battle scars that may save us a few lives?

    Read the article

  • HD video editing system with Truecrypt

    - by Rob
    I'm looking to do hi-def video editing and transcoding on an unencrypted standard partition, with Truecrypt on the system partition for sensitive data. I'm aiming to keep certain data private but still have performance where needed. Goals: (1) maximum, unimpacted performance for hi-def video editing - encryption of the video itself is not required; (2) encrypt the system partition, using Truecrypt, for web/email privacy etc. in the event of loss. In other words I want to selectively encrypt the hard drive - i.e. make the system partition encrypted but not impact the maximum performance that would otherwise be available to me for hi-def/HD video editing. The thinking is to use an unencrypted partition for the video and set the video applications to point at that. Assuming they use that partition only for their workspace and not the encrypted system partition, I should expect no performance hit. Would I be correct? I guess it might depend on the application - whether the app is hard-wired to always use the system partition for temporary storage during editing and transcoding, or has to be installed on the C: system partition. So some real data on how various apps behave in this respect would be useful, e.g. Adobe, Cyberlink, Nero, etc. I have an Intel i7 quad-core (8 threads) 1.6GHz (up to 2.8GHz turbo boost), 4GB RAM, 7200rpm SATA, Nvidia HP laptop. I've read the excellent posting about the general performance impact of Truecrypt, but the benchmarks weren't specific enough for my needs, where I'm dealing with HD video and using a non-encrypted partition to maintain maximum performance.

    Read the article

  • OOM-Killer called every now and then..

    - by SpyrosP
    Hello there, I have a dedicated server where I've installed apache2 as well as Rails Passenger. Although I have 2GB of RAM and most of the time about 1.5GB is free, there are some random times when I lose SSH and general connectivity because oom-killer is killing processes. I suppose there is a memory leak, but I cannot find out where it comes from. oom-killer kills apache2, mysql, passenger, whatever. Yesterday I did a "cat syslog | grep -c oom-killer" and got 57 occurrences! It seems that something is seriously destroying the memory. Once I reboot, everything comes back to normal. I suspect it could be related to Passenger, but I'm still trying to figure it out. Can you think of anything else, or do you have anything to suggest that would make identifying the leak easier? (I was even thinking of writing a bash script, to be run with cron every 5 minutes or so.)
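    A minimal sketch of the kind of periodic logger mentioned at the end of the question, written in Python rather than bash; the log path and the cron schedule are assumptions, not from the post, and it would need to run from root's crontab (e.g. */5 * * * *). It records the top resident-memory consumers so the oom-killer entries in syslog can be matched against whatever was actually growing:

        #!/usr/bin/env python
        # Log the top resident-memory consumers by reading /proc directly.
        # Hypothetical log location; adjust as needed.
        import os, time

        LOG = "/var/log/memwatch.log"

        def rss_by_process():
            """Return (rss_kb, pid, cmdline) tuples for all readable processes."""
            rows = []
            for pid in filter(str.isdigit, os.listdir("/proc")):
                try:
                    with open("/proc/%s/status" % pid) as f:
                        status = f.read()
                    with open("/proc/%s/cmdline" % pid) as f:
                        cmd = f.read().replace("\0", " ").strip() or "?"
                except (IOError, OSError):
                    continue  # process exited while we were reading it
                for line in status.splitlines():
                    if line.startswith("VmRSS:"):
                        rows.append((int(line.split()[1]), pid, cmd))
            return sorted(rows, reverse=True)

        with open(LOG, "a") as log:
            log.write("==== %s ====\n" % time.strftime("%Y-%m-%d %H:%M:%S"))
            for rss_kb, pid, cmd in rss_by_process()[:15]:  # top 15 by VmRSS
                log.write("%8d kB  %6s  %s\n" % (rss_kb, pid, cmd[:80]))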

    Read the article

  • Apache randomly timing out

    - by Zaid
    I've been wrestling with this problem for a few days now. Apache works fine, then suddenly starts timing out. There is nothing in the error log. A few more things: I've gone so far as to reinstall the box; the codebase has not been touched in months; I've done a speed test, so I know it's not a bandwidth overload problem; and restarting Apache does not necessarily fix the issue, even temporarily (the only thing that does is random attempts). If you can guide me to tools that can help me figure this out, or if you know of any specifics I should check, I'd appreciate it.
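    One cheap way to turn "suddenly starts timing out" into data is an external probe that hits the site on a fixed interval and logs how long each request takes, so the hangs can be lined up with Apache's logs, cron jobs, backups, and so on. A rough Python sketch - the URL, interval and timeout are placeholders, not taken from the question:

        #!/usr/bin/env python
        # Poll a URL and log the response time (or the failure) once per interval.
        import time
        import urllib.request

        URL = "http://example.com/"   # placeholder - point at the affected vhost
        TIMEOUT = 20                  # seconds before the attempt counts as a hang
        INTERVAL = 30                 # seconds between probes

        while True:
            start = time.time()
            try:
                urllib.request.urlopen(URL, timeout=TIMEOUT).read()
                status = "ok"
            except Exception as exc:  # timeouts, resets, 5xx responses, DNS failures
                status = "FAIL: %s" % exc
            print("%s  %6.2fs  %s" % (time.strftime("%H:%M:%S"), time.time() - start, status))
            time.sleep(INTERVAL)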

    Read the article

  • What would cause previously sent emails to be sent again from Exchange the first time someone logs in

    - by Ken Pespisa
    A few users testing our Citrix XenApp service found that several (seemingly random) previously sent emails were sent out immediately after they logged into Outlook via Citrix for the first time. The problem hasn't occurred for them since. After one user had this happen, and we scratched our heads about it thinking it was a fluke, our IT director had this same issue. I guess I'd rule out any PEBKAC issues. I really don't know where to begin troubleshooting this problem. If you have any ideas what could have caused this, I'd appreciate you sharing them, as strange or far-fetched as they may seem :)

    Read the article

  • Snow Leopard, PHP, and MySQL

    - by Peter
    I have just installed Snow Leopard and now my PHP/MySQL program ends in a "Segmentation fault". I have been searching the web for a solution; I realize that there are some issues with SL/PHP/MySQL, but I have not found anything that works yet. I downloaded the binary MySQL package mysql-5.1.42-osx10.6-x86_64. I have updated the php.ini file as suggested in various posts. When I run PHP and connect to the MySQL server, the behavior is a bit random. In many cases it connects and reads data just fine. In my specific case the PHP program constructs a LOAD DATA LOCAL INFILE ... statement to load data from a text file. It should run several such queries one after another in a loop. It works one time but then halts with a "Segmentation fault". The program worked fine in Leopard, but not now. My versions are: OS 10.6.2, PHP 5.3.0, MySQL 5.1.42.

    Read the article

  • Ubuntu: Take actions when system temperature gets too high

    - by Josh
    One of the CPU fans on my Compaq Presario laptop running Ubuntu 9.10 seems to have bitten the dust. The fan is deep within the case, and since I intend to replace the laptop in the next 6 months it's not worth replacing. I have the laptop on a cooling pad and most of the time the system is fine, with CPU temps around 90°-110°F. Occasionally, however, I'm seeing random lockups which I believe are due to the system overheating. How can I configure the system to: lower the CPU speed when the temperature reaches a certain level (e.g. 110°F), and shut the system down when the temperature reaches a critical level (and what would that be - 130°F?).
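    A rough sketch of how the two behaviours asked for could be scripted and run as root from cron every minute. The sensor path and the Fahrenheit-to-Celsius thresholds (110°F ≈ 43°C, 130°F ≈ 54°C) are assumptions - check what the machine actually exposes under /proc/acpi/thermal_zone or /sys/class/thermal, and note that cpufreq-set comes from the cpufrequtils package:

        #!/usr/bin/env python
        # Throttle at a warning temperature, halt at a critical one.
        import subprocess

        SENSOR = "/sys/class/thermal/thermal_zone0/temp"  # assumed sensor path
        SLOW_AT_C = 43   # ~110 F
        HALT_AT_C = 54   # ~130 F

        with open(SENSOR) as f:
            temp_c = int(f.read()) / 1000.0  # value is in millidegrees Celsius

        if temp_c >= HALT_AT_C:
            subprocess.call(["shutdown", "-h", "now"])
        elif temp_c >= SLOW_AT_C:
            # "powersave" pins the CPU at its lowest available frequency
            subprocess.call(["cpufreq-set", "-g", "powersave"])
        else:
            subprocess.call(["cpufreq-set", "-g", "ondemand"])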

    Read the article

  • How to rename folder? Folder in use [closed]

    - by Colonel Panic
    Possible Duplicate: "Deleting folders give access denied error message on Windows 7, although I am administrator". I am trying to rename a folder, but apparently it is in use. I don't think it is. I'd like to rename my folder - how can I do that? This happens regularly, and at random, particularly in my music folder. I'd like to know the cause and fix that, rather than work around it with some special tool whenever I want to rename something.

    Read the article

  • Interpreting Munin graphs showing available entropy and MySQL slow queries in sync

    - by user64204
    We're experiencing performance issues on our website, and after reviewing our Munin graphs, the only metrics we've found in sync are available entropy and MySQL slow queries, with the latter influenced by our number of logged-in users. Based on the Wikipedia entropy page, my understanding is that entropy is the amount of randomness (here measured in bytes) that the system can use for various tasks, mainly cryptography and functions that require random input. The peaks in available entropy and MySQL slow queries occur in sync and at regular intervals, and the number of MySQL slow queries is proportional to our number of Drupal users, whereas the peaks in available entropy are much more constant and less proportional to those two metrics. So we're thinking available entropy reflects a root cause which, combined with the traffic to our website, is causing the slow queries (and not the opposite, i.e. slow queries influencing the entropy). Accordingly: Q: What underlying problem do you think could cause regular peaks in available entropy that could have an influence on MySQL's ability to process queries?
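    To test the "something is draining the entropy pool" idea independently of Munin's 5-minute resolution, the kernel's counter can be sampled much more often. A small Python sketch (the path is the standard Linux one; the sampling interval is arbitrary):

        #!/usr/bin/env python
        # Sample the kernel entropy pool every few seconds and print a timestamped log.
        import time

        while True:
            with open("/proc/sys/kernel/random/entropy_avail") as f:
                avail = int(f.read())
            print("%s  entropy_avail=%d" % (time.strftime("%H:%M:%S"), avail))
            time.sleep(5)

    Correlating that log with the timestamps in the MySQL slow query log should show whether the entropy swings precede the slow queries or follow them.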

    Read the article

  • Getting Dell E6320 with I7 to work with 3 monitors at 1920x1080p x 3

    - by MadBoy
    I want to buy a Dell E6320, which comes with an Intel Core i7-2620M (2.70GHz, 4MB cache, dual core) and Intel HD Graphics 3000. The laptop will come with a docking station, and I want to connect 3 monitors to that docking station so that working at home gives me an additional boost. The docking station will only let me connect 2 monitors, so I'm looking at the following other options: (1) a Matrox TRIPLEHEAD2GO DIGITAL Edition or TRIPLEHEAD2GO DP Edition - but reading the Matrox support page, the Intel GPU can't run the highest resolution with 3 monitors connected, and it gets even worse since it seems the monitors would have to run at 50Hz; also, I'm not sure, but it seems that Matrox doesn't present them as 3 separate monitors but simply as one big space (which is rather the opposite of what I need). (2) Buy 2, or maybe just 1, USB-based monitors - but that would also mean having 1 or 2 monitors different from the main one, unless I buy 3 USB-based monitors, which would mean more money to spend; also, I found only a couple of models and most of them require USB 3.0 and no other cables (nice but costly - I couldn't find a decent monitor that takes only USB for the signal with power connected normally), and the docking station has only one USB 3.0 port - can I use a hub and still get it to work? (3) Find some converters from digital to USB (I think DisplayLink makes some?). (4) Buy a different laptop - but what kind? I need it to be an i7, small (13"), fast and lightweight, and at the same time it needs a docking station that I can use at home to connect 3 external monitors. (5) Some other suggested solution... Edit: I need the 3 monitors for work, in terms of coding in Visual Studio or having Word/Excel/Outlook open. Nothing fancy. Maybe a movie once in a while.

    Read the article

  • Screen flicker -> Severe System Slowdown?

    - by Adam Robinson
    I'm using a Dell D830 laptop, and over the last few weeks it's been developing a very irritating screen flicker problem that leads to the system slowing down almost to the point of unusability. At seemingly random times (no commonality between how long the system has been running, what I was doing, what applications were open, etc.) my screen (I use two external LCD's with the laptop closed in a dock) flickers for a moment, then the system becomes incredibly slow. The screen redraws painfully slowly--almost like what you might expect to see with generic graphics drivers installed--and the entire system is maddeningly unresponsive. The only thing that seems to be able to correct the issue is a restart. I've checked the event logs and nothing out of the ordinary is there, and definitely nothing that's common to all of the events. I'm running XP Pro SP2. Any ideas?

    Read the article

  • losetup does not decrypt device in Ubuntu 11.4 as before

    - by Kay
    I had an external volume mounted using losetup for about two years. It was created using Ubuntu 9.4, and I used the same Ubuntu installation throughout all dist-upgrades. Now that I have bought a new laptop, I set up a fresh Ubuntu 11.4 installation on it. The problem is: losetup -e twofish /dev/loop0 /dev/sdb2 does not decrypt the volume anymore. The data in /dev/loop0 appears to be random. I am sure I entered the correct password, and I modprobe'd cryptoloop and twofish. My question is: has Canonical made some obscure changes to losetup, like adding a salt? Does losetup depend on configuration files I did not know about? How can I decrypt the volume on my new laptop?

    Read the article

  • Hard crash when drawing content for CALayer using quartz

    - by Lukasz
    I am trying to figure out why iOS crashes my application in a harsh way (no crash logs, immediate shutdown, black screen of death with a spinner shown for a while). It happens when I render content for a CALayer using Quartz. I suspected a memory issue (it happens only when testing on the device), but the memory logs, as well as the Instruments allocation logs, look quite OK. Let me paste in the fatal function:

        - (void)renderTiles
        {
            if (rendering) {
                //NSLog(@"====== RENDERING TILES SKIP =======");
                return;
            }
            rendering = YES;

            CGRect b = tileLayer.bounds;
            CGSize s = b.size;
            CGFloat imageScale = [[UIScreen mainScreen] scale];
            s.height *= imageScale;
            s.width *= imageScale;

            dispatch_async(queue, ^{
                NSLog(@"");
                NSLog(@"====== RENDERING TILES START =======");
                NSLog(@"1. Before creating context");
                report_memory();

                CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
                NSLog(@"2. After creating color space");
                report_memory();

                NSLog(@"3. About to create context with size: %@", NSStringFromCGSize(s));
                CGContextRef ctx = CGBitmapContextCreate(NULL, s.width, s.height, 8, 0, colorSpace, kCGImageAlphaPremultipliedLast);
                NSLog(@"4. After creating context");
                report_memory();

                CGAffineTransform flipTransform = CGAffineTransformMake(1.0, 0.0, 0.0, -1.0, 0.0, s.height);
                CGContextConcatCTM(ctx, flipTransform);

                CGRect tileRect = CGRectMake(0, 0, tileImageScaledSize.width, tileImageScaledSize.height);
                CGContextDrawTiledImage(ctx, tileRect, tileCGImageScaled);

                NSLog(@"5. Before creating cgimage from context");
                report_memory();
                CGImageRef cgImage = CGBitmapContextCreateImage(ctx);
                NSLog(@"6. After creating cgimage from context");
                report_memory();

                dispatch_sync(dispatch_get_main_queue(), ^{
                    tileLayer.contents = (id)cgImage;
                });
                NSLog(@"7. After asgning tile layer contents = cgimage");
                report_memory();

                CGColorSpaceRelease(colorSpace);
                CGContextRelease(ctx);
                CGImageRelease(cgImage);
                NSLog(@"8. After releasing image and context context");
                report_memory();

                NSLog(@"====== RENDERING TILES END =======");
                NSLog(@"");
                rendering = NO;
            });
        }

    Here are the logs:

        ====== RENDERING TILES START =======
        1. Before creating context
        Memory in use (in bytes): 28340224 / 519442432 (5.5%)
        2. After creating color space
        Memory in use (in bytes): 28340224 / 519442432 (5.5%)
        3. About to create context with size: {6324, 5208}
        4. After creating context
        Memory in use (in bytes): 28344320 / 651268096 (4.4%)
        5. Before creating cgimage from context
        Memory in use (in bytes): 153649152 / 651333632 (23.6%)
        6. After creating cgimage from context
        Memory in use (in bytes): 153649152 / 783159296 (19.6%)
        7. After asgning tile layer contents = cgimage
        Memory in use (in bytes): 153653248 / 783253504 (19.6%)
        8. After releasing image and context context
        Memory in use (in bytes): 21688320 / 651288576 (3.3%)
        ====== RENDERING TILES END =======

    The application crashes in random places - sometimes when reaching the end of the function and sometimes at a random step. Which direction should I look in for a solution? Is it possible that GCD is causing the problem? Or maybe the context size, or some underlying Core Animation references?
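    For scale (this arithmetic is not in the original post): an 8-bit-per-channel RGBA bitmap context of 6324 × 5208 pixels needs about 6324 × 5208 × 4 ≈ 132 MB, which roughly matches the jump from ~28 MB to ~153 MB between steps 4 and 5 of the log. The CGImage built from it and the copy handed to the render server when the layer contents are committed are not fully visible to report_memory inside the block, so peak usage is likely well above what the log shows; on iOS, terminations due to memory pressure produce no ordinary crash report, which would fit the symptoms described.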

    Read the article

  • Increasing speed of python code

    - by Curious2learn
    Hi, I have some Python code that has many classes. I used cProfile to find that the total time to run the program is 68 seconds, and that the following function in a class called Buyers takes about 60 of those 68 seconds. I have to run the program about 100 times, so any increase in speed will help. Can you suggest ways to increase the speed by modifying the code? If you need more information that would help, please let me know.

        def qtyDemanded(self, timePd, priceVector):
            '''Returns quantity demanded in period timePd. In addition, also
            updates the list of customers and non-customers.

            Inputs: timePd and priceVector
            Output: count of people for whom priceVector[-1] < utility
            '''
            ## Initialize count of customers to zero
            ## Set self.customers and self.nonCustomers to empty lists
            price = priceVector[-1]
            count = 0
            self.customers = []
            self.nonCustomers = []

            for person in self.people:
                if person.utility >= price:
                    person.customer = 1
                    self.customers.append(person)
                else:
                    person.customer = 0
                    self.nonCustomers.append(person)
            return len(self.customers)

    self.people is a list of person objects. Each person has customer and utility as its attributes.

    EDIT - responses added: Thanks so much for the suggestions. Here is the response to some of the questions and suggestions people have kindly made. I have not tried them all, but will try the others and write back later.

    (1) @amber - the function is accessed 80,000 times.
    (2) @gnibbler and others - self.people is a list of Person objects in memory, not connected to a database.
    (3) @Hugh Bothwell - cumtime taken by the original function: 60.8 s (accessed 80,000 times); cumtime taken by the new function with local function aliases as suggested: 56.4 s (accessed 80,000 times).
    (4) @rotoglup and @Martin Thomas - I have not tried your solutions yet. I need to check the rest of the code to see the places where I use self.customers before I can make the change of not appending the customers to the self.customers list. But I will try this and write back.
    (5) @TryPyPy - thanks for your kind offer to check the code. Let me first read a little on the suggestions you have made to see if those will be feasible to use.

    EDIT 2: Some suggested that since I am flagging the customers and non-customers in self.people, I should try it without creating the separate self.customers and self.nonCustomers lists using append, and instead loop over self.people to find the number of customers. I tried the following code and timed both functions below, f_w_append and f_wo_append. I did find that the latter takes less time, but it is still 96% of the time taken by the former - that is, a very small increase in speed. @TryPyPy - the following piece of code is complete enough to check the bottleneck function, in case your offer to check it with other compilers still stands. Thanks again to everyone who replied.
        import numpy

        class person(object):
            def __init__(self, util):
                self.utility = util
                self.customer = 0

        class population(object):
            def __init__(self, numpeople):
                self.people = []
                self.cus = []
                self.noncus = []
                numpy.random.seed(1)
                utils = numpy.random.uniform(0, 300, numpeople)
                for u in utils:
                    per = person(u)
                    self.people.append(per)

        popn = population(300)

        def f_w_append():
            '''Function with append'''
            P = 75
            cus = []
            noncus = []
            for per in popn.people:
                if per.utility >= P:
                    per.customer = 1
                    cus.append(per)
                else:
                    per.customer = 0
                    noncus.append(per)
            return len(cus)

        def f_wo_append():
            '''Function without append'''
            P = 75
            for per in popn.people:
                if per.utility >= P:
                    per.customer = 1
                else:
                    per.customer = 0
            numcustomers = 0
            for per in popn.people:
                if per.customer == 1:
                    numcustomers += 1
            return numcustomers
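    One further direction, not from the original thread: if the utilities can be kept in a numpy array alongside the Person objects, the whole comparison loop collapses into a single vectorised expression. A sketch against the same seeded population as above:

        import numpy

        numpy.random.seed(1)
        utilities = numpy.random.uniform(0, 300, 300)  # same draw as population(300)

        def f_vectorised(P=75):
            '''Count customers with one array comparison instead of a Python loop.'''
            customer_mask = utilities >= P     # boolean array, one entry per person
            return int(customer_mask.sum())    # number of True entries

        print(f_vectorised())  # same count that f_w_append() / f_wo_append() return

    Whether this is usable depends on how often the rest of the code needs the per-person customer flag and the self.customers list; the boolean mask can be mapped back to objects with a single comprehension when that is needed.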

    Read the article

  • Logging in with a different password than the database password, PHPMyAdmin

    - by Andrew M
    I am trying to install phpMyAdmin on my server to manage my MySQL databases. Right now I have only one I want to add, but I would like to be able to manage multiple databases from the same account in phpMyAdmin. How would I configure PMA so I could log in with "andrew" and a password of "examplepassword" instead of the annoyingly long and unchangeable database user and password I am provided (i.e. db3483478234, with a password of random characters)? I can't seem to find a place to specify a different password from the regular database username and password.

    Read the article

  • parallel computation for an Iterator of elements in Java

    - by Brian Harris
    I've had the same need a few times now and wanted to get other thoughts on the right way to structure a solution. The need is to perform some operation on many elements, on many threads, without needing to have all elements in memory at once - just the ones under computation. As in, Iterables.partition is insufficient because it brings all elements into memory up front. Expressing it in code, I want to write a BulkCalc2 that does the same thing as BulkCalc1, just in parallel. Below is sample code that illustrates my best attempt. I'm not satisfied because it's big and ugly, but it does seem to accomplish my goals of keeping threads highly utilized until the work is done, propagating any exceptions during computation, and not having more than numThreads instances of BigThing necessarily in memory at once. I'll accept the answer which meets the stated goals in the most concise way, whether it's a way to improve my BulkCalc2 or a completely different solution.

        interface BigThing {
            int getId();
            String getString();
        }

        class Calc {
            // somewhat expensive computation
            double calc(BigThing bigThing) {
                Random r = new Random(bigThing.getString().hashCode());
                double d = 0;
                for (int i = 0; i < 100000; i++) {
                    d += r.nextDouble();
                }
                return d;
            }
        }

        class BulkCalc1 {
            final Calc calc;

            public BulkCalc1(Calc calc) {
                this.calc = calc;
            }

            public TreeMap<Integer, Double> calc(Iterator<BigThing> in) {
                TreeMap<Integer, Double> results = Maps.newTreeMap();
                while (in.hasNext()) {
                    BigThing o = in.next();
                    results.put(o.getId(), calc.calc(o));
                }
                return results;
            }
        }

        class SafeIterator<T> {
            final Iterator<T> in;

            SafeIterator(Iterator<T> in) {
                this.in = in;
            }

            synchronized T nextOrNull() {
                if (in.hasNext()) {
                    return in.next();
                }
                return null;
            }
        }

        class BulkCalc2 {
            final Calc calc;
            final int numThreads;

            public BulkCalc2(Calc calc, int numThreads) {
                this.calc = calc;
                this.numThreads = numThreads;
            }

            public TreeMap<Integer, Double> calc(Iterator<BigThing> in) {
                ExecutorService e = Executors.newFixedThreadPool(numThreads);
                List<Future<?>> futures = Lists.newLinkedList();
                final Map<Integer, Double> results = new MapMaker().concurrencyLevel(numThreads).makeMap();
                final SafeIterator<BigThing> it = new SafeIterator<BigThing>(in);
                for (int i = 0; i < numThreads; i++) {
                    futures.add(e.submit(new Runnable() {
                        @Override
                        public void run() {
                            while (true) {
                                BigThing o = it.nextOrNull();
                                if (o == null) {
                                    return;
                                }
                                results.put(o.getId(), calc.calc(o));
                            }
                        }
                    }));
                }
                e.shutdown();
                for (Future<?> future : futures) {
                    try {
                        future.get();
                    } catch (InterruptedException ex) {
                        // swallowing is OK
                    } catch (ExecutionException ex) {
                        throw Throwables.propagate(ex.getCause());
                    }
                }
                return new TreeMap<Integer, Double>(results);
            }
        }

    Read the article

  • what's wrong with my producer-consumer queue design?

    - by toasteroven
    I'm starting with the C# code example here. I'm trying to adapt it for a couple reasons: 1) in my scenario, all tasks will be put in the queue up-front before consumers will start, and 2) I wanted to abstract the worker into a separate class instead of having raw Thread members within the WorkerQueue class. My queue doesn't seem to dispose of itself though, it just hangs, and when I break in Visual Studio it's stuck on the _th.Join() line for WorkerThread #1. Also, is there a better way to organize this? Something about exposing the WaitOne() and Join() methods seems wrong, but I couldn't think of an appropriate way to let the WorkerThread interact with the queue. Also, an aside - if I call q.Start(#) at the top of the using block, only some of the threads every kick in (e.g. threads 1, 2, and 8 process every task). Why is this? Is it a race condition of some sort, or am I doing something wrong? using System; using System.Collections.Generic; using System.Text; using System.Messaging; using System.Threading; using System.Linq; namespace QueueTest { class Program { static void Main(string[] args) { using (WorkQueue q = new WorkQueue()) { q.Finished += new Action(delegate { Console.WriteLine("All jobs finished"); }); Random r = new Random(); foreach (int i in Enumerable.Range(1, 10)) q.Enqueue(r.Next(100, 500)); Console.WriteLine("All jobs queued"); q.Start(8); } } } class WorkQueue : IDisposable { private Queue _jobs = new Queue(); private int _job_count; private EventWaitHandle _wh = new AutoResetEvent(false); private object _lock = new object(); private List _th; public event Action Finished; public WorkQueue() { } public void Start(int num_threads) { _job_count = _jobs.Count; _th = new List(num_threads); foreach (int i in Enumerable.Range(1, num_threads)) { _th.Add(new WorkerThread(i, this)); _th[_th.Count - 1].JobFinished += new Action(WorkQueue_JobFinished); } } void WorkQueue_JobFinished(int obj) { lock (_lock) { _job_count--; if (_job_count == 0 && Finished != null) Finished(); } } public void Enqueue(int job) { lock (_lock) _jobs.Enqueue(job); _wh.Set(); } public void Dispose() { Enqueue(Int32.MinValue); _th.ForEach(th = th.Join()); _wh.Close(); } public int GetNextJob() { lock (_lock) { if (_jobs.Count 0) return _jobs.Dequeue(); else return Int32.MinValue; } } public void WaitOne() { _wh.WaitOne(); } } class WorkerThread { private Thread _th; private WorkQueue _q; private int _i; public event Action JobFinished; public WorkerThread(int i, WorkQueue q) { _i = i; _q = q; _th = new Thread(DoWork); _th.Start(); } public void Join() { _th.Join(); } private void DoWork() { while (true) { int job = _q.GetNextJob(); if (job != Int32.MinValue) { Console.WriteLine("Thread {0} Got job {1}", _i, job); Thread.Sleep(job * 10); // in reality would to actual work here if (JobFinished != null) JobFinished(job); } else { Console.WriteLine("Thread {0} no job available", _i); _q.WaitOne(); } } } } }

    Read the article

  • File Upload drops with no reason

    - by sufoid
    Hello, I want to make a file upload. The script should take the image, resize it and upload it, but it seems there is an error in the upload that I cannot identify. Here is the code:

        define ("MAX_SIZE","2000");   // maximum size for uploaded images
        define ("WIDTH","107");       // width of thumbnail
        define ("HEIGHT","107");      // alternative height of thumbnail (portrait 107x80)
        define ("WIDTH2","600");      // width of (compressed) photo
        define ("HEIGHT2","600");     // alternative height of (compressed) photo (portrait 600x450)

        if (isset($_POST['Submit'])) {
            // iterate through all upload fields
            foreach ($_FILES as $key => $value) {
                // read name of user-file
                $image = $_FILES[$key]['name'];
                // if it is not empty
                if ($image) {
                    $filename = stripslashes($_FILES[$key]['name']); // get original name of file from client's machine
                    $extension = getExtension($filename);            // get extension of file
                    $extension = strtolower($extension);             // in lower case format
                    // if extension not known, output error; otherwise continue
                    if (($extension != "jpg") && ($extension != "jpeg") && ($extension != "png") && ($extension != "gif")) {
                        echo '<div class="failure">Fehler bei Datei '. $_FILES[$key]['name'] .': Unbekannter Dateityp: Es können nur Dateien vom Typ .gif, .jpg oder .png hochgeladen werden.</div>';
                    } else {
                        // get size of image in bytes
                        // $_FILES[$key]['tmp_name'] >> temporary filename under which the uploaded file was stored on the server
                        $size = getimagesize($_FILES[$key]['tmp_name']);
                        $sizekb = filesize($_FILES[$key]['tmp_name']);
                        // if image size exceeds defined maximum size, output error; otherwise continue
                        if ($sizekb > MAX_SIZE*1024) {
                            echo '<div class="failure">Fehler bei Datei '. $_FILES[$key]['name'] .': Die Datei konnte nicht hochgeladen werden: die Dateigröße überschreitet das Limit von 2MB.</div>';
                        } else {
                            $rand = md5(rand() * time());        // create random file name
                            $image_name = $rand.'.'.$extension;  // unique name (random number)
                            // new name contains full path of storage location (images folder)
                            $consname = "photos/".$image_name;          // path to big image
                            $consname2 = "photos/thumbs/".$image_name;  // path to thumbnail
                            $copied = copy($_FILES[$key]['tmp_name'], $consname);
                            $copied = copy($_FILES[$key]['tmp_name'], $consname2);
                            $sql = "INSERT INTO photos (galery_id, photo, thumb) VALUES (". $id .", '$consname', '$consname2')" or die(mysql_error());
                            $query = mysql_query($sql) or die(mysql_error());
                            // if image hasn't been uploaded successfully, output error; otherwise continue
                            if (!$copied) {
                                echo '<div class="failure">Fehler bei Datei '. $_FILES[$key]['name'] .': Die Datei konnte nicht hochgeladen werden.</div>';
                            } else {
                                $thumb_name = $consname2; // path for thumbnail creation & storage
                                // call to function: create thumbnail
                                // parameters: image name, thumbnail name, specified width and height
                                $thumb = make_thumb($consname,$thumb_name,WIDTH,HEIGHT);
                                $thumb = make_thumb($consname,$consname,WIDTH2,HEIGHT2);
                            }
                        }
                    }
                }
            }
            // current image could be uploaded successfully
            echo '<div class="success">'. $success .' Foto(s) erfolgreich hochgeladen!</div>';
            showForm(); // call to function: create upload form
        }

    Read the article

  • Windows 7 Won't Boot

    - by Vie
    I recently built a new computer, my fifth one:

    ASUS Maximus III Formula LGA 1156 Intel P55 ATX motherboard
    EVGA 01G-P3-1452-TR GeForce GTS 450 Superclocked 1GB 128-bit GDDR5 PCI Express 2.0 x16 HDCP video card
    COOLMAX RM-1000B 1000W ATX PSU
    Intel Core i7-875K Lynnfield 2.93GHz LGA 1156 95W quad-core unlocked processor
    G.SKILL Ripjaws Series 16GB (4 x 4GB) 240-pin DDR3 SDRAM DDR3 1333 (PC3 10666) memory
    WD VelociRaptor WD3000GLFS 300GB 10000 RPM SATA 3.0Gb/s hard drive
    Sony Optiarc CD/DVD burner model AD-7261S-0B, LightScribe
    Windows 7 Home Premium 64-bit

    It gets hung up on the "Starting Windows" screen. When I went to install the OS it did the same thing - it wouldn't go past the Windows logo - so I put the new HDD into my old computer and installed Windows 7 there, thinking it was just an installer error. I put the fully installed HDD back into my new machine and it still gets stuck on the "Starting Windows" screen. I've tried most everything I've looked up: disabled USB, disabled Turbo Boost, disabled everything that wasn't essential (just about every configuration I can think of), took it apart and put it back together, ran a memory scan which came back successful, and took all the RAM out save one 4GB stick (it wouldn't even boot when I did that). I don't know what could be wrong. The only thing I can think of is a compatibility issue somewhere, but I've gone over it again and again and I don't know where there would be an issue like that. Need backup! .<

    Read the article

  • Can I configure mod_proxy to use different parameters based on HTTP Method?

    - by Graham Lea
    I'm using mod_proxy as a failover proxy with two balance members. While mod_proxy marks dead nodes as dead, it still routes one request per minute to each dead node and, if it's still dead, will either return 503 to the client (if maxattempts=0) or retry on another node (if it's > 0). The backends are serving a REST web service. Currently I have set maxattempts=0 because I don't want to retry POSTs and DELETEs. This means that when one node is dead, each minute a random client will receive a 503. Unfortunately, most of our clients interpret codes like 503 as "everything is dead" rather than "that didn't work, but please try it again". In order to program some kind of automatic retry for safe requests at the proxy layer, I'd like to configure mod_proxy to use maxattempts=1 for GET and HEAD requests and maxattempts=0 for all other HTTP methods. Is this possible? (And how? :)
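    For what it's worth, the GET/HEAD-only retry policy described above can also be expressed outside the proxy, in whichever clients you control. A hedged Python sketch - the timeout, retry count and backoff are placeholders, and this is not a mod_proxy feature:

        import time
        import urllib.error
        import urllib.request

        IDEMPOTENT = {"GET", "HEAD"}  # methods that are safe to replay

        def fetch(url, method="GET", retries=1, backoff=1.0):
            """Retry on 503 only for idempotent methods; POST/DELETE fail immediately."""
            attempt = 0
            while True:
                req = urllib.request.Request(url, method=method)
                try:
                    return urllib.request.urlopen(req, timeout=10)
                except urllib.error.HTTPError as err:
                    can_retry = err.code == 503 and method in IDEMPOTENT and attempt < retries
                    if not can_retry:
                        raise
                    attempt += 1
                    time.sleep(backoff * attempt)  # simple linear backoff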

    Read the article

  • IE Sessions in Terminal Server

    - by Jacob
    Currently we are using WADE middleware for our order processing operation. We have about 40 operators who use 1 terminal server to open IE 8 and access the WADE middleware. It seems random to me, but every now and then someone will come to me and tell me that IE has a "Page cannot be displayed" or "HTTP Error 500" error. I did a bit of testing on my local machine and I never get this error during normal operations. However, when I open one session with username "test" and then log in to the WADE admin console as admin, I run into problems. I do not run into problems until I log out of the WADE admin; once I do, my "test" session says "Page cannot be displayed". This makes me think the IE user sessions on the terminal server are cross-talking. Does anyone have any possible settings I can change in IE, or do you think this is an issue with the middleware? The terminal server is Windows 2003, btw.

    Read the article

  • Restrict a port to a single app

    - by viraptor
    I'd like to restrict a range of UDP ports to a single application (or a user). What I'd like to achieve is not simply blocking bind() from other UIDs, but also removing the range from the pool that can be auto-assigned. For example, if someone tries to explicitly bind 12345 but doesn't run the specified app, they should get EPERM; and if someone binds an unspecified port, the kernel should never pick 12345 at random. Is there any system that can help here? I tried browsing the AppArmor / SELinux docs, but they seem to cover only the blocking part.

    Read the article

  • What port does OpenLink ODBC Driver use?

    - by user36737
    I use Avaya Reporting Services and OpenLink ODBC drivers for the DB connection. I know that it uses port 5000 for the handshake, but after that I believe it uses a random port for communication. I want to deploy my application, and it will communicate with the client's system in their datacenter. They are asking which ports they should open on their firewalls. I obviously can't ask them to open the whole range above 50,000 that I know the OpenLink ODBC drivers use. Can someone tell me which port I should ask my client to open?

    Read the article

  • mediaplayer failure exception

    - by Rahulkapil
    I am working on an android application in which i have to play random sounds from my assets folder. there are some images also, when i click on any image from those images a sound must play regarding to that image from assets folder. i managed all but sometime my mediaplayer fails unexpectedly. I am attaching my code also. private Handler threadHandler = new Handler() { public void handleMessage(android.os.Message msg) { /*first*/ try{ InputStream ims1 = getAssets().open("images/" +dataAll_pic_name1); d1 = Drawable.createFromStream(ims1, null); rl1.setVisibility(View.VISIBLE); img1.setImageDrawable(d1); AssetFileDescriptor afd = getAssets().openFd("sounds/" + str_snd1); mp2 = new MediaPlayer(); mp2.setDataSource(afd.getFileDescriptor(),afd.getStartOffset(),afd.getLength()); mp2.prepare(); mp2.start(); mp2.setOnCompletionListener(new OnCompletionListener() { @Override public void onCompletion(MediaPlayer mp) { /*second*/ try{ InputStream ims2 = getAssets().open("images/" +dataAll_pic_name2); d2 = Drawable.createFromStream(ims2, null); rl2.setVisibility(View.VISIBLE); img2.setImageDrawable(d2); AssetFileDescriptor afd = getAssets().openFd("sounds/" + str_snd2); mp2 = new MediaPlayer(); mp2.setDataSource(afd.getFileDescriptor(),afd.getStartOffset(),afd.getLength()); mp2.prepare(); mp2.start(); mp2.setOnCompletionListener(new OnCompletionListener() { @Override public void onCompletion(MediaPlayer mp) { /*third*/ try{ InputStream ims3 = getAssets().open("images/" +dataAll_pic_name3); d3 = Drawable.createFromStream(ims3, null); rl3.setVisibility(View.VISIBLE); img3.setImageDrawable(d3); AssetFileDescriptor afd = getAssets().openFd("sounds/" + str_snd3); mp2 = new MediaPlayer(); mp2.setDataSource(afd.getFileDescriptor(),afd.getStartOffset(),afd.getLength()); mp2.prepare(); mp2.start(); mp2.setOnCompletionListener(new OnCompletionListener() { @Override public void onCompletion(MediaPlayer mp) { /*four*/ try{ InputStream ims4 = getAssets().open("images/" +dataAll_pic_name4); d4 = Drawable.createFromStream(ims4, null); rl4.setVisibility(View.VISIBLE); img4.setImageDrawable(d4); AssetFileDescriptor afd = getAssets().openFd("sounds/" + str_snd4); mp2 = new MediaPlayer(); mp2.setDataSource(afd.getFileDescriptor(),afd.getStartOffset(),afd.getLength()); mp2.prepare(); mp2.start(); mp2.setOnCompletionListener(new OnCompletionListener() { @Override public void onCompletion(MediaPlayer mp) { startAnimation(); //randomSoundPlay(); timer.schedule( new TimerTask(){ public void run() { System.out.println("Wait, what........................:"); try{ AssetFileDescriptor afd = getAssets().openFd("sounds/" + dataAll_sound_name); mp2 = new MediaPlayer(); mp2.setDataSource(afd.getFileDescriptor(),afd.getStartOffset(),afd.getLength()); mp2.prepare(); mp2.start(); mp2.setOnCompletionListener(new OnCompletionListener() { @Override public void onCompletion(MediaPlayer mp) { vg1.setClickable(true); vg2.setClickable(true); vg3.setClickable(true); vg4.setClickable(true); btn_spkr.setVisibility(View.VISIBLE); txtImage(); } }); }catch(Exception e){ e.printStackTrace(); } } }, delay_que); } }); }catch(Exception e){ e.printStackTrace(); } } }); }catch(Exception e){ e.printStackTrace(); } } }); }catch(Exception e){ e.printStackTrace(); } } }); }catch(Exception e){ e.printStackTrace(); } } }; in above code random images and sound sets in my activity. now when i click on any image a sound must play but sometimes it fails.. i tried but unable to resolve this issue. help me out. thanks in advance.

    Read the article

  • Snake game, can't add new rectangles?

    - by user1814358
    I had seen some problem on other forum, and i tried to solve them but unsuccesfully, now is too intersting for me and i just need to solve this for my ego... lol Main problem is when snake eat first food, snake added one rectangle, when snake eat second food nothings changed? When snake head position and food position are same, then i try to add another rectangle, i have list where i stored rectangles coordinates x and y. What do you think, how i can to solve this problem? I think that i'm so close, but my brain has stopped. :( code: using System; using System.Collections.Generic; using System.ComponentModel; using System.Data; using System.Drawing; using System.Linq; using System.Text; using System.Windows.Forms; namespace Zmijice { enum Kontrole { Desno,Levo,Gore,Dole,Stoj } public partial class Zmijice : Form { int koox; int kooy; private int x; private int y; private int c; private int b; private const int visina=20; //width private const int sirina=20; //height //private int m = 20; //private int n = 20; private List<int> SegmentX = new List<int>(); private List<int> SegmentY = new List<int>(); Random rnd = new Random(); private Kontrole Pozicija; public Zmijice() { InitializeComponent(); //slucajne pozicije pri startu x = rnd.Next(1, 20) * 20; y = rnd.Next(1, 20) * 20; c = rnd.Next(1, 20) * 20; b = rnd.Next(1, 20) * 20; Pozicija = Kontrole.Stoj; //stop on start } private void Tajmer_Tick(object sender, EventArgs e) { if (Pozicija == Kontrole.Stoj) { //none } if (Pozicija == Kontrole.Desno) { x += 20; } else if (Pozicija == Kontrole.Levo) { x -= 20; } else if (Pozicija == Kontrole.Gore) { y -= 20; } else if (Pozicija == Kontrole.Dole) { y += 20; } Invalidate(); } private void Zmijice_KeyDown(object sender, KeyEventArgs e) { if (e.KeyCode == Keys.Down) { Pozicija = Kontrole.Dole; } else if (e.KeyCode == Keys.Up) { Pozicija = Kontrole.Gore; } else if (e.KeyCode == Keys.Right) { Pozicija = Kontrole.Desno; } else if (e.KeyCode == Keys.Left) { Pozicija = Kontrole.Levo; } } private void Zmijice_Paint(object sender, PaintEventArgs e) { int counter = 1; //ako je pojela hranu if (x==c && y==b) { counter++; c = rnd.Next(1, 20); //nova pozicija hrrane c = c * 20; b = rnd.Next(1, 20); b = b * 20; //povecaj zmijicu SegmentX.Add(x); SegmentY.Add(y); //label1.Text = SegmentX.Count().ToString() + " " + SegmentY.Count().ToString(); //left if (Pozicija == Kontrole.Levo) { koox = x+20; kooy = y; } //right if (Pozicija == Kontrole.Desno) { koox = x-20; kooy = y; } //up if (Pozicija == Kontrole.Gore) { koox = x; kooy = y+20; } //down if (Pozicija == Kontrole.Dole) { koox = x; kooy = y-20; } } foreach (int k in SegmentX) { foreach (int l in SegmentY) { e.Graphics.FillRectangle(Brushes.Orange, koox, kooy, sirina, visina); } } koox = x; kooy = y; e.Graphics.FillRectangle(Brushes.DarkRed, x, y, sirina, visina); e.Graphics.FillRectangle(Brushes.Aqua, c, b, sirina, visina); } } }

    Read the article

< Previous Page | 156 157 158 159 160 161 162 163 164 165 166 167  | Next Page >