Search Results

Search found 23098 results on 924 pages for 'multiple processes'.


  • Codeigniter: how should I restructure db schema?

    - by Kevin Brown
    I don't even know if that's the right term. May it be known that I'm a major novice! I have three tables: users, profiles, and survey. Each one has user_id as its first field (auto-increment for users), and they're all tied by a foreign key constraint with ON DELETE CASCADE. Currently, each user (say user_id 1) has a corresponding row in the other tables: profiles lists all their information, and the survey table holds all their survey data. Now I must change things... darn scope creep. Users need the ability to have multiple survey results. I imagine this would be similar to a comment table for a blog... My entire app is built around the idea that a single user is linked to exactly one profile and one survey. How should I structure my db so that a user can have multiple tests/profiles for the test? Any advice, information, and personal knowledge is appreciated! Right now the only way I know how to accommodate my client is to create a pseudo-user for each test (so unnecessary) and list them in a view table (called "your tests") -- these are obtained from the db with: where user_id=manager_id
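
    A minimal sketch of one common restructuring, assuming MySQL and illustrative column names: give each survey row its own primary key and keep user_id as an ordinary foreign key, so one user row can own any number of surveys.

        CREATE TABLE surveys (
            survey_id  INT UNSIGNED NOT NULL AUTO_INCREMENT,
            user_id    INT UNSIGNED NOT NULL,
            taken_at   DATETIME NOT NULL,
            -- ... the existing survey answer columns ...
            PRIMARY KEY (survey_id),
            FOREIGN KEY (user_id) REFERENCES users (user_id) ON DELETE CASCADE
        ) ENGINE=InnoDB;

        -- the "your tests" list for a logged-in user then becomes:
        -- SELECT * FROM surveys WHERE user_id = ?;

    The profiles table can stay one-to-one with users; only the table that must become one-to-many needs its own key.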

    Read the article

  • How To Check My Current Version of FFMPEG

    - by aamiri
    I have FFmpeg installed on 2 different servers. On one of the servers, I run into an issue every time I try to convert m4v files: ffmpeg just processes the file indefinitely. When I take the same source file and run it on the other server, it works just fine. Both servers are running the same version of GNU/Linux. Someone suggested I check whether the same version of ffmpeg is installed on both servers, so my question to you all is: "how do I check my ffmpeg version?" Thanks!
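
    A quick way to check, assuming ffmpeg is on the PATH (older builds also print the same banner when run with no arguments):

        ffmpeg -version

    The first line of output shows the version (or the SVN/git revision on older builds), and the following lines show the configure flags, which are often what actually differs between two installs.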

    Read the article

  • Low Throughput on Windows Named Pipe Over WAN

    - by MichaelB76
    I'm having problems with low performance using a Windows named pipe. The throughput drops off rapidly as the network latency increases: there is a roughly linear relationship between messages sent per second and round-trip time. It seems that the client must ack each message before the server will send the next one. This leads to very poor performance; I can only send 5 (~100 byte) messages per second over a link with an RTT of 200 ms. The pipe is asynchronous, using multiple overlapped write operations (and multiple overlapped reads at the client end), but this is not improving throughput. Is it possible to send messages in parallel over a named pipe? The pipe is created using PIPE_TYPE_MESSAGE; would PIPE_READMODE_BYTE work better? Is there any other way I can improve performance? This is a deployed solution, so I can't simply replace the pipe with a socket connection (I've read that Windows named pipes aren't recommended for use over a WAN, and I'm wondering if this is why). I'd be grateful for any help with this matter.

    Read the article

  • Is there a quality, file-size, or other benefit to JPEG sizes being multiples of 8px or 16px?

    - by davebug
    The JPEG compression encoding process splits a given image into blocks of 8x8 pixels, working with these blocks in subsequent lossy and lossless compression steps. [source] It is also mentioned that if the image dimensions are a multiple of one MCU block (a Minimum Coded Unit, 'usually 16 pixels in both directions'), lossless alterations to a JPEG can be performed. [source] I am working with product images and would like to know both if, and how much, benefit can be derived from using multiples of 16 in my final image size (say, an image sized 480px by 360px) vs. a non-multiple of 16 (such as 484x362). In this example I am not interested in further alterations, editing, or recompression of the final image. To try to get closer to a specific answer where I know there must be largely generalities, given a 480x360 image that is 64k and saved at maximum quality in Photoshop [example]:
    - Can I expect any quality loss from an image that is 484x362?
    - What amount of file size addition can I expect (for this example, the additional space would be white pixels)?
    - Are there any other disadvantages to growing larger than the 8px grid?
    I know it's arbitrary to use that specific example, but it would still be helpful (for me and potentially any others pondering an image size) to understand what level of compromise I'd be dealing with in breaking away from the 8px grid. The key issue here is a debate I've had over whether 8-pixel-divisible images are higher quality than images that are not divisible by 8 pixels.
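
    For a rough sense of scale, a hedged back-of-the-envelope calculation (exact numbers depend on the encoder and on chroma subsampling): the encoder pads each dimension up to the next multiple of the block size before compressing, so

        480 x 360  ->  60 x 45 blocks of 8x8   (fits exactly; 172,800 px encoded)
        484 x 362  ->  61 x 46 blocks of 8x8   (padded to 488 x 368; 179,584 px encoded)

    The 484x362 image therefore carries roughly 4% more block data, and its right and bottom edge blocks mix real pixels with padding, which is where any extra size or edge artifacts would come from. With 4:2:0 chroma subsampling the MCU is 16x16, in which case even 480x360 is not an exact fit (360 / 16 = 22.5).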

    Read the article

  • Windows Vista Home memory usage problem [closed]

    - by lordg
    Hi, I have a Windows Vista Home laptop from a client that is running on 1GB of RAM. The laptop is used for super basic things: Word, internet, Outlook, etc. What makes zero sense is that the RAM is being completely consumed, sometimes causing the PC to hang when it can't take any more. However, in Task Manager the processes appear to be consuming only about 100MB (Private Working Set). The client literally has a simple setup and is running Kaspersky, though that does not appear to be the cause of the excessive memory usage. Does anyone have a suggestion on how to resolve the memory issue, or how to track down what is actually happening and fix it? G

    Read the article

  • What does this error mean (Can't create TCP/IP socket (24))?

    - by user105196
    I have a web server running RHEL 6.2, and MySQL 5.5.23 on another server. The web server can read from the MySQL server without problems, but from time to time I get this error:

        [Sun Sep 23 06:13:07 2012] [error] [client XXXXX] DBI connect('XXXX:192.168.1.2:3306','XXX',...) failed: Can't create TCP/IP socket (24) at /var/www/html/file.pm line 199.

    My question: what does this error mean (Can't create TCP/IP socket (24))? Is it an OS error or a MySQL error?

        perl -v
        This is perl, v5.10.1 (*) built for x86_64-linux-thread-multi

        mysql -V
        mysql  Ver 14.14 Distrib 5.5.23, for Linux (x86_64) using readline 5.1

        su - mysql -s /bin/bash -c 'ulimit -a'
        core file size          (blocks, -c) 0
        data seg size           (kbytes, -d) unlimited
        scheduling priority             (-e) 0
        file size               (blocks, -f) unlimited
        pending signals                 (-i) 127220
        max locked memory       (kbytes, -l) 64
        max memory size         (kbytes, -m) unlimited
        open files                      (-n) 1024
        pipe size            (512 bytes, -p) 8
        POSIX message queues     (bytes, -q) 819200
        real-time priority              (-r) 0
        stack size              (kbytes, -s) 10240
        cpu time               (seconds, -t) unlimited
        max user processes              (-u) 1024
        virtual memory          (kbytes, -v) unlimited
        file locks                      (-x) unlimited
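
    Errno 24 on Linux is EMFILE, "Too many open files", and here it is raised by the client (the Perl process on the web server) when it tries to create the connection socket, so the limit that matters is the one for the user Apache/Perl runs as, not the mysql user shown above. A hedged sketch of raising it, assuming the web server processes run as the apache user (adjust the user name and values to your setup; for a service started from an init script you may need a ulimit -n line in the init script or /etc/sysconfig/httpd instead):

        # /etc/security/limits.conf
        apache  soft  nofile  4096
        apache  hard  nofile  8192

    After raising the limit it is still worth finding out why so many descriptors are open at once (e.g. lsof -p <httpd pid> | wc -l), since leaked connections or filehandles in /var/www/html/file.pm would eventually hit any ceiling.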

    Read the article

  • How to Track CPU and Memory Usage Per Process

    - by Mjsk
    I have seen this question asked on here before but was unable to follow the answer that was given. I would like to monitor a process's CPU, memory, and possibly GPU usage over a given time, ideally presented as a graph. It would be nice if I could do this using Performance Monitor, but I am open to alternative solutions as well. I have tried Performance Monitor and my problem is that I'm not sure which performance counters to use, since there are so many. I've been looking at the Process, Processor, Memory, etc. categories, but I'm not sure which counters within them will be of interest to me. My OS is Windows 7.
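
    For a single process, the counters that usually matter live under the Process object: "% Processor Time" and "Working Set - Private" (plus "Private Bytes"), for the instance named after the executable without its extension. A hedged example using the built-in typeperf tool to log them to a CSV that Excel or PerfMon can graph (notepad and the sample counts are placeholders):

        typeperf "\Process(notepad)\% Processor Time" "\Process(notepad)\Working Set - Private" -si 1 -sc 300 -f CSV -o notepad_usage.csv

    Note that a process's "% Processor Time" is scaled per core, so it can read up to 100 x (number of cores); Windows 7 has no built-in GPU counter, so GPU usage needs a vendor tool.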

    Read the article

  • issue with tab bar view displaying a compound view

    - by ambertch
    I created a tab bar application and made the first tab a table: I created a table view controller and set the class of the first tab's view controller to it. This works fine, and I see the contents of the table filling the whole screen. However, this is not my actual end goal - I would like a compound window with multiple views: the aforementioned table, plus a custom view with data in it. So what I do is create a nib for this content (call it contentNib), change the tab's class from the table view controller to a generic UIViewController, and set the nib of that tab to this new contentNib. In contentNib I drag on a tableView and set File's Owner to the TableViewController, then link the dataSource and delegate to File's Owner (which is TableViewController). Surprisingly this does not work and I receive the error:

        Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '*** -[UIViewController tableView:numberOfRowsInSection:]: unrecognized selector sent to instance 0x3b0f910'

    This is bewildering to me, since File's Owner is the TableViewController, which has been assigned to be both the dataSource and delegate. Does someone have insight into my confusion, or a link to an example of how to have a compound view include a tableView?

    Update: I see this in the Apple table view programming guide: "Note: You should use a UIViewController subclass rather than a subclass of UITableViewController to manage a table view if the view to be managed is composed of multiple subviews, one of which is a table view. The default behavior of the UITableViewController class is to make the table view fill the screen between the navigation bar and the tab bar (if either are present)." I don't really get what this is telling me to do, though... if someone can explain or point me to an example I'd be much appreciated!
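
    A rough sketch of what the Apple note suggests (class and outlet names are illustrative; pre-ARC memory management to match the era): make the controller for that tab a plain UIViewController subclass that adopts the table protocols itself, set File's Owner's class in the nib to this subclass, and wire the table view's dataSource and delegate outlets to File's Owner.

        // CompoundViewController.h
        #import <UIKit/UIKit.h>

        @interface CompoundViewController : UIViewController <UITableViewDataSource, UITableViewDelegate>
        @property (nonatomic, retain) IBOutlet UITableView *tableView;  // the table subview in the nib
        @end

        // CompoundViewController.m
        @implementation CompoundViewController
        @synthesize tableView;

        - (NSInteger)tableView:(UITableView *)tableView numberOfRowsInSection:(NSInteger)section {
            return 10;  // placeholder row count
        }

        - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
            UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:@"Cell"];
            if (cell == nil) {
                cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault
                                               reuseIdentifier:@"Cell"] autorelease];
            }
            cell.textLabel.text = @"Row";  // placeholder content
            return cell;
        }

        - (void)dealloc {
            [tableView release];
            [super dealloc];
        }
        @end

    The exception quoted in the question suggests that at runtime the dataSource outlet was resolved against a plain UIViewController (the tab's generic controller) rather than an object that implements tableView:numberOfRowsInSection:, which is exactly what wiring everything to one UIViewController subclass like this avoids.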

    Read the article

  • Graphing per-user CPU usage on a Linux machine

    - by mart1n
    I want to graph (graphical output would be great, i.e. a .png file) the following situation: I have users A, B, and C. I limit their resources so that when all users run a CPU intensive task at the same time, those processes will use 25%, 25%, and 50% of CPU. I know I can get the real-time stats using top but have no idea what to do with them. I've searched through the huge top man page but haven't found much on the subject of outputting data that can be graphed. Ideally, the graph would show a span of maybe 30 seconds. Any ideas how to achieve this?
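
    One hedged approach, assuming a procps top whose batch output puts the user in column 2 and %CPU in column 9 (check with top -b -n 1 on your system; userA/userB/userC are placeholders): let top sample in batch mode, sum %CPU per user per iteration with awk, and plot the result with gnuplot.

        top -b -d 1 -n 30 | awk '
            /^top -/ { t++ }
            t && ($2 == "userA" || $2 == "userB" || $2 == "userC") { cpu[t, $2] += $9 }
            END { for (i = 1; i <= t; i++)
                      print i, cpu[i, "userA"] + 0, cpu[i, "userB"] + 0, cpu[i, "userC"] + 0 }
        ' > per_user_cpu.dat

        gnuplot -e "set terminal png size 800,400; set output 'cpu.png'; \
                    plot 'per_user_cpu.dat' u 1:2 w lines t 'userA', \
                         '' u 1:3 w lines t 'userB', '' u 1:4 w lines t 'userC'"

    With -d 1 -n 30 the resulting graph covers roughly the 30-second span you mention. pidstat (from the sysstat package) is an alternative sampler if parsing top output feels too fragile.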

    Read the article

  • Launch script after SFTP disconnect

    - by Mates
    I'm currently using Caja (basically the same as Nautilus) to connect to my server over SSH and work with files. What I'm looking for is a way to launch a simple script when I disconnect - I can launch a script after disconnecting from the TTY by putting it into the ~/.bash_logout file, but that is not executed when disconnecting from a file manager. The only idea I have is to set up a cron job that periodically checks for running sftp-server or sshd processes and launches the script when no such process is running. Is there any easier way to do this?
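
    If the cron route turns out to be the practical one, a minimal sketch (paths and the one-minute interval are illustrative) that fires only once per disconnect by remembering whether a session was seen:

        # crontab entry
        * * * * * /usr/local/bin/check_sftp.sh

        # /usr/local/bin/check_sftp.sh
        #!/bin/sh
        STATE=/var/tmp/sftp_was_active
        if pgrep -x sftp-server > /dev/null; then
            touch "$STATE"                     # a session is (still) open
        elif [ -e "$STATE" ]; then
            rm -f "$STATE"                     # last session just ended
            /path/to/on_disconnect_script.sh
        fi

    A tidier alternative on systems with pam_exec is hooking the sshd PAM session close, but that fires for every SSH logout, not just the file-manager ones.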

    Read the article

  • Comparison in Monit Permissions Testing

    - by beanland
    I'm trying to use Monit to check the permissions of a particular directory, but I only care that it's readable by all users. I don't care about any other permissions (write, execute) for the owner, group, or all, and I don't care about any special permissions. Given that I can't change the permissions of this directory, and that another administrator might change them without affecting my processes that rely on it (i.e., granting or revoking write access for the group), is it possible to check for a minimum permission in Monit? I have this, which is currently working:

        check directory archive path /var/home/archive/
            if failed perm 0755 then alert

    But I would like to have something like this:

        check directory archive path /var/home/archive/
            if failed perm > 444 then alert

    This is failing for me. Is it possible to use comparison operators in Monit's permission checking? If not, are there any workarounds?

    Read the article

  • Access denied to EFS encrypted files after PC joins domain

    - by mjmarsh
    I'm experiencing strange behavior with the Windows Encrypting File System (EFS):
    - I have a machine that is in workgroup mode (not joined to a domain).
    - I encrypt an entire directory structure on the machine (basically a folder and subfolders with data files for my application).
    - My application writes and reads files in the encrypted hierarchy as a local Windows user (let's call the account 'SecureUser'). This works fine.
    - I then join the PC to a domain (let's call it 'TEST').
    - Afterwards, processes running as the local 'SecureUser' account can't read the files it wrote originally when it was off the domain. (What is also strange is that the files are now listed as "read only" and I cannot unset this flag via Windows Explorer or the command line, even though it looks like it succeeds.)
    - I then 'un-join' the PC from the domain and everything works again.
    Is there something about changing domain membership on a PC that changes the behavior of EFS so that previously encrypted files cannot be read, even by the originating user? Thanks in advance.

    Read the article

  • Custom API requirement

    - by Jonathan.Peppers
    We are currently working on an API for an existing system. It basically wraps some web requests as an easy-to-use library that 3rd-party companies should be able to use with our product. As part of the API there is an event mechanism where the server can call back to the client via a constantly-running socket connection. To minimize load on the server, we want to have only one connection per computer. Currently there is a socket open per process, and that could eventually cause load problems if multiple applications used the API. So my question is: if we want to deploy our API as a single standalone assembly, what is the best way to fix our problem? A couple of options we thought of (see the sketch below):
    - Write an out-of-process COM object (don't know if that works in .NET).
    - Include a second exe file that would be required for events; it would have to single-instance itself and open a named pipe or something to communicate with multiple processes.
    - Extract this exe file from an embedded resource and execute it.
    None of those really seem ideal. Any better ideas?
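
    A minimal sketch of the second option, assuming C# and illustrative names: the broker exe single-instances itself with a named mutex and then owns the one server connection, relaying events to local clients over a named pipe.

        using System;
        using System.IO.Pipes;
        using System.Threading;

        class EventBroker
        {
            static void Main()
            {
                bool createdNew;
                using (var mutex = new Mutex(true, @"Global\MyVendor.ApiEventBroker", out createdNew))
                {
                    if (!createdNew)
                        return;  // another broker on this machine already holds the server connection

                    using (var pipe = new NamedPipeServerStream("MyVendor.ApiEvents",
                                                                PipeDirection.InOut, 10))
                    {
                        pipe.WaitForConnection();
                        // ... open the single socket to the server and relay its events
                        //     to connected API clients here ...
                    }
                }
            }
        }

    The API assembly would try to connect to the pipe first and only spawn the broker (extracted from an embedded resource, per the third option) when the connect fails.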

    Read the article

  • Performance Drop Lingers after Load [closed]

    - by Charles
    Possible Duplicate: How do you do Load Testing and Capacity Planning for Databases I'm noticing a drop in performance after subsequent load tests. Although our cpu and ram numbers look fine, performance seems to degrade over time as sustained load is applied to the system. If we allow more time between the load tests, the performance gets back to about 1,000 ms, but if you apply load every 3 minutes or so, it starts to degrade to a point where it takes 12,000 ms. None of the application servers are showing lingering apache processes and the number of database connections cools down to about 3 (from a sustained 20). Is there anything else I should be looking out for here?

    Read the article

  • What do I do about a Java program that spawned two instances of itself?

    - by user288915
    I have a Java JAR file that is triggered by a SQL Server job. It's been running successfully for months. The process pulls a structured flat file into a staging database, then pushes that data into an XML file. However, yesterday the process was triggered twice at the same time - I can tell from the log file that gets created that the process ran twice simultaneously. This caused a lot of issues, and the XML file it kicked out was malformed and contained duplicate nodes, etc. My question is: is this a known issue with the JVM spawning multiple instances of itself, or should I be looking at SQL Server as the culprit? I'm looking into socket locking or file locking to prevent multiple instances in the future (see the sketch below). This is the first time I've ever heard of this issue. More info:
    - The job is scheduled to run every minute.
    - The job triggers a .bat file that contains java.exe -jar filename.jar.
    - The Java program scans a directory for a file and executes a loop to process the file if it finds one.
    - After it processes the file it runs another loop that kicks out XML messages.
    I can provide code samples if that would help. Thank you, Kevin
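
    A common way to do the file-locking idea, sketched with an illustrative lock-file name: take an exclusive java.nio lock at startup and exit if it is already held, so a second launch (whatever triggered it) becomes a no-op.

        import java.io.RandomAccessFile;
        import java.nio.channels.FileChannel;
        import java.nio.channels.FileLock;

        public class SingleInstanceGuard {
            public static void main(String[] args) throws Exception {
                RandomAccessFile lockFile = new RandomAccessFile("myjob.lock", "rw");
                FileChannel channel = lockFile.getChannel();
                FileLock lock = channel.tryLock();   // non-blocking; null if another JVM holds it
                if (lock == null) {
                    System.err.println("Another instance is already running; exiting.");
                    return;
                }
                // ... existing work: scan the directory, process the file, emit the XML ...
                // the lock is released when the JVM exits
            }
        }

    It is also worth checking the SQL Server Agent job history for overlapping runs, since Agent normally will not start a job while the previous run of that same job is still executing.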

    Read the article

  • Disabling certain JBoss ports

    - by Rich
    We are trying to configure JBoss 5.1.0 to be as lightweight and as secure as possible. Part of this process is to identify and close any ports we do not need. Three ports that are still open but that we don't believe we need are:
    - 4457 - bisocket
    - 4712 - JBossTS Recovery Manager
    - 4713 - JBossTS Transaction Status Manager
    We don't think we need any of these features (but could be wrong). Bisocket seems to be a way for JMS clients behind a firewall to communicate with JBoss; we hardly use JMS now, and when we do, it is very unlikely that we will need this firewall-traversing ability. I am less sure about the two JBossTS ports - I am guessing they are used in a clustered environment, and we aren't clustered. So my question is: how do we disable these ports (and the associated processes where possible), or, if we do need them, why do we need to keep them open?

    Read the article

  • Is there a case for parameterising using Abstract classes rather than Interfaces?

    - by Chris
    I'm currently developing a component-based API that is heavily stateful. The top-level components implement around a dozen interfaces each, so the stock top-level components sit on top of a stack of abstract implementations which in turn contain multiple mixin implementations and implement multiple mixin interfaces. So far, so good (I hope). The problem is that the base functionality is extremely complex to implement (thousands of lines across 5 layers of base classes), so I do not want component writers to implement the interfaces themselves but rather to extend my base classes (where all the boilerplate code is already written). If the API accepts interfaces rather than references to the abstract implementation that I want component writers to extend, then I risk the implementer not performing the validation that is both required and assumed by other areas of the code. My question, therefore, is: is it sometimes valid to parameterise API methods using an abstract implementation reference rather than a reference to the interface(s) it implements? Do you have an example of a well-designed API that uses this technique, or am I trying to talk myself into bad practice?
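
    One hedged way to frame the trade-off in code (Java-flavoured, names illustrative): if the abstract base keeps its entry points final and performs the validation itself, then accepting the base type is what actually guarantees the invariant, whereas accepting the interface only guarantees the method signatures.

        // the interface the components expose
        public interface Component {
            void process(String input);
        }

        // the base the API wants component writers to extend
        public abstract class AbstractComponent implements Component {
            @Override
            public final void process(String input) {    // final: validation cannot be skipped
                validate(input);                          // the boiler-plate the API depends on
                doProcess(input);                         // the only part a component writer supplies
            }
            private void validate(String input) {
                if (input == null || input.isEmpty())
                    throw new IllegalArgumentException("input must be non-empty");
            }
            protected abstract void doProcess(String input);
        }

        // API method parameterised on the base rather than the interface
        public final class ComponentRegistry {
            public void register(AbstractComponent component) { /* ... */ }
        }

    The usual alternative is to keep the interface in the API signature and re-validate inside the API at every entry point, which keeps the API flexible but duplicates the checks the base classes were written to centralise.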

    Read the article

  • Exclude category from main RSS feed, but not all feeds

    - by jamEs
    I've got a blog that supplies content to multiple MailChimp newsletters via RSS. The first newsletter works fine, but I'm having issues with the second. The second newsletter has "hidden" content: it isn't meant for wide consumption, so it doesn't appear on the front page, but it is accessible elsewhere on the site. The snafu is that not all of this content is hidden, just some of it, while other pieces of content for this newsletter can overlap with the first newsletter. This makes excluding everything problematic, as posts can be assigned multiple categories, some of which I wouldn't want hidden. The issue I'm running into is that I have a way to exclude this content from the front page, but not from the main RSS feed only. I'm using WP Hide Post for this, which lets me exclude a post from feeds, but that removes it from all feeds, including the ones that feed the newsletters. I'm currently using /feed?cat=XXX to reference those feeds. Is there a way to make it so that the category feeds still work, but the main /feed excludes this content?
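
    A hedged sketch of one WordPress-side approach (requires PHP 5.3+ for the closure; 123 stands in for the hidden category's term ID): hook pre_get_posts and drop the hidden category only when the main, non-category feed is being built, leaving /feed?cat=XXX untouched.

        // functions.php (or a small plugin)
        add_action( 'pre_get_posts', function ( $query ) {
            if ( $query->is_feed() && $query->is_main_query() && ! $query->is_category() ) {
                $query->set( 'category__not_in', array( 123 ) );
            }
        } );

    Whether this plays nicely with WP Hide Post's own flags would need testing, since both end up altering the same feed queries.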

    Read the article

  • Why doesn't Firefox redownload images already on a page?

    - by vvo
    Hello, I just read this article: https://developer.mozilla.org/en/HTTP_Caching_FAQ There's a Firefox behavior (and probably some other browsers too) I'd like to understand: if I take any webpage and insert the same image multiple times in JavaScript, the image is only downloaded ONCE, even if I specify all the headers needed to say "do not ever use the cache" (see the article). I know there are workarounds (like adding query strings to the end of URLs, etc.), but why does Firefox act like that? If I say that an image must not be cached, why is the image still taken from the cache when I try to re-insert it? Also, which cache is used for this? (I guess it's the memory cache.) Is the behavior the same for dynamic script inclusion, for example? THE ANSWER IS NO :) I just tested it, and the same headers on a JS script will make Firefox redownload it each time you append the script to the DOM.

    PS: I know you're wondering WHY I need to do this (appending the same image multiple times and forcing a redownload), but this is the way our app works. Thank you.

    The answer seems to be: Firefox stores images for the current page load in the memory cache even if you specify that it doesn't have to cache them. You can't change this behavior, which is odd because it's not the same for JavaScript files, for example. Could someone explain or link to a document describing how the Firefox cache works?

    Read the article

  • Retrieve POST data without knowing exact number of fields

    - by James
    Hi all! I'm creating an online poll from scratch which will be held in a database. I'm working on getting a system set up so someone can create a new poll. I will be having the user fill out a simple HTML form with the questions and answers (there may be several answers). The user will be able to add multiple questions and multiple answers for each question. As the total number of questions and answers will be decided by the user, I need to create some clever PHP to cater for this - however many there are. When dealing with a static number of questions, it's simple, but I'm having trouble thinking of a way to get all the POST data into individual PHP variables so I can process them. I was thinking of using a foreach loop - anyone got any ideas? Sorry for the long-winded description! If anyone needs anything clarified, I'd be happy to do so. My problem is that I can't get my head around how to deal with the POST values when I don't know exactly which element of the array will contain what. If things were static with a set number of questions and answers, I'd know $_POST[0] was Question1, etc. Thank you! =)
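
    One common pattern, sketched with illustrative field names: name the form inputs as arrays, so PHP gathers them for you no matter how many the user added, and a pair of foreach loops walks whatever arrived.

        <!-- form: one question[] per question, answers grouped by question index -->
        <input type="text" name="question[]">
        <input type="text" name="answer[0][]">
        <input type="text" name="answer[0][]">

        <?php
        // $_POST['question'] and $_POST['answer'] arrive as arrays already
        $questions = isset($_POST['question']) ? $_POST['question'] : array();
        foreach ($questions as $i => $questionText) {
            $answers = isset($_POST['answer'][$i]) ? $_POST['answer'][$i] : array();
            foreach ($answers as $answerText) {
                // insert $questionText / $answerText into the poll tables here
            }
        }
        ?>

    This avoids guessing which numeric $_POST index holds what, since the grouping is encoded in the field names themselves.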

    Read the article

  • Looping through a method without for/foreach/while

    - by RichK
    Is there a way of calling a method/lines of code multiple times without using a for/foreach/while loop? For example, if I were to use a for loop:

        int numberOfIterations = 6;
        for (int i = 0; i < numberOfIterations; i++)
        {
            DoSomething();
            SomeProperty = true;
        }

    The lines of code I'm calling don't use 'i', and in my opinion the whole loop declaration hides what I'm trying to do. The same goes for a foreach. I was wondering if there's a looping statement I can use that looks something like:

        do(6)
        {
            DoSomething();
            SomeProperty = true;
        }

    It's really clear that I just want to execute that code 6 times, and there's no noise involving instantiating an index and adding 1 to some arbitrary variable. As a learning exercise I have written a static class and method, Do.Multiple(int iterations, Action action), which works but scores very highly on the pretentious scale, and I'm sure my peers wouldn't approve. I'm probably just being picky, and a for loop is certainly the most recognisable, but as a learning point I was wondering if there are (cleaner) alternatives. Thanks. (I've had a look at this thread, but it's not quite the same.) http://stackoverflow.com/questions/2248985/using-ienumerable-without-foreach-loop
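
    For comparison, a sketch of the same idea spelled as a C# extension method (names illustrative), which makes the call site read closest to the hypothetical do(6) syntax; whether it is clearer than a plain for loop is a matter of team taste.

        using System;

        static class RepeatExtensions
        {
            public static void Times(this int count, Action action)
            {
                for (int i = 0; i < count; i++)
                {
                    action();
                }
            }
        }

        // usage:
        // 6.Times(() => { DoSomething(); SomeProperty = true; });

    Enumerable.Range(0, 6) with a foreach (or Enumerable.Repeat) is the LINQ spelling of the same thing, but it still surfaces an unused loop variable.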

    Read the article

  • No Cure for a Slow Computer?

    - by Marv
    I have a laptop with the following specs: 2.2 GHz dual-core processor, 4 GB of DDR2 RAM, 180 GB of HDD space. I have tried everything. I have reinstalled the OS. I have installed Ubuntu with the Lubuntu, LXDE, GNOME Classic, and Unity 2D desktops. I have even tried downgrading to XP with all non-critical processes and services turned off. Even with the most stripped-down version of Ubuntu it heats up and the fan starts churning. I'm out of ideas. I have tried everything. If you have any tips, please help. :'(

    Read the article

  • Apache with mod_perl eating memory when idle

    - by syneticon-dj
    An Apache webserver running a mod_perl application is showing abnormal memory usage: after the "day load" ceases, the system's memory is exhausted by the Apache processes and oom_killer is invoked. As the load returns the following morning, memory usage normalizes - probably because Apache workers get recycled periodically once a sufficient number of hits is generated (a graph of memory usage and a graph of Apache hits per second, omitted here, correlate this). The remaining 2 hits per second throughout the night come from HAProxy checks - it runs HEAD http://mydomain.example.com/running HTTP/1.0 requests against the server every half a second, with "running" being a static file (i.e. not invoking any Perl code). Disabling these checks also seems to remedy the memory usage problem, but that obviously cannot be the solution. All 3 similarly configured servers (behind HAProxy) show this behavior. The OS is Ubuntu 10.10, Apache version 2.2.16. This looks like a memory leak, but I have no idea how to start debugging it - any hints?
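
    Independently of finding the actual leak, the usual mod_perl stopgap is to recycle children on request count and on size, not only during busy hours. A hedged httpd.conf sketch (thresholds are illustrative, and Apache2::SizeLimit's method names should be checked against the installed version):

        # prefork MPM: retire a child after N requests, which the nightly
        # 2 req/s HAProxy trickle will still reach eventually
        MaxRequestsPerChild 1000

        # and/or retire children that grow beyond a size threshold
        PerlModule Apache2::SizeLimit
        <Perl>
            Apache2::SizeLimit->set_max_process_size(150_000);   # KB
        </Perl>
        PerlCleanupHandler Apache2::SizeLimit

    MaxRequestsPerChild alone may be enough here, since the HAProxy checks count as requests and will keep cycling children through the night rather than letting the same ones grow until morning.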

    Read the article

  • How to monitor RAM usage for Hyper-V VMs ?

    - by Mac
    A bit of context first: on Windows 2008 Standard x64 with 8 GB RAM, I have 5 VMs running which should take up 1664 MB of RAM (3*256 MB + 384 MB + 512 MB). There is nothing else running on this server except the basic OS components (this is not a Core installation). I know that each VM will use more RAM on the host than what has been configured in Hyper-V, but when I run Task Manager it says 6.7 GB used! If I sum up the RAM used by each process in Task Manager (showing processes from all users), I get something around 1 GB... So: how can I check how much RAM each VM is really using on the host (it does not seem to be available via Task Manager)?

    Read the article

  • Does using functional languages help against computing values repeatedly?

    - by sharptooth
    Consider a function f(x,y):

        f(x,0) = x*x;
        f(0,y) = y*(y + 1);
        f(x,y) = f(x,y-1) + f(x-1,y);

    If one tries to implement this recursively in a language like C++, he will encounter a problem. Suppose the function is first called with x = x0 and y = y0. Then for any pair (x,y) where 0 <= x < x0 and 0 <= y < y0, the intermediate values will be computed multiple times - the recursive calls form a huge tree in which multiple leaves contain the same pairs (x,y). For pairs (x,y) where x and y are both close to 0, the values will be computed numerous times. For instance, I tested a similar function implemented in C++ - for x=20 and y=20 its computation takes about 4 hours (yes, four Earth hours!). Obviously the implementation can be rewritten so that repeated computation doesn't occur - either iteratively or with a cache table. The question is: will functional languages perform any better and avoid the repeated computations when a function like the one above is implemented recursively?
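
    For what it's worth, a naive recursive definition in a functional language is just as exponential - laziness alone does not memoize across separate calls - but lazy data structures make explicit memoization very compact. A sketch in Haskell (the memo table is a lazily filled array, so each f(x,y) is computed once):

        import Data.Array

        f :: Int -> Int -> Integer
        f xMax yMax = table ! (xMax, yMax)
          where
            table = array ((0, 0), (xMax, yMax))
                    [ ((x, y), go x y) | x <- [0 .. xMax], y <- [0 .. yMax] ]
            go x 0 = fromIntegral (x * x)
            go 0 y = fromIntegral (y * (y + 1))
            go x y = table ! (x, y - 1) + table ! (x - 1, y)

        main :: IO ()
        main = print (f 20 20)   -- completes almost instantly instead of taking hours

    The same trick exists in C++ as a lookup table or map-based cache; the functional version just expresses "the table is defined in terms of itself" directly.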

    Read the article
