Search Results

Search found 23098 results on 924 pages for 'multiple processes'.

Page 437/924

  • How can I tell whether a webpart that has been deployed to a site is a native webpart that ships with SharePoint or a custom-developed one?

    - by program247365
    I have a SharePoint 2007 MOSS instance, and I'm on a fact-finding mission. There have been multiple developers developing multiple webparts and deploying them (using the VS2005/2008 SharePoint Extensions). I thought I could look at the "Modified By" field in the "Web Part Gallery" list of my site, but a developer's name somehow appears on some of the out-of-the-box webparts, and ones I know were custom developed show "System Account" - so looking at that field is a no go. I then thought I could look at the "Group" each webpart was assigned to, but they were assigned to many different groups inconsistently - so that piece of information is a no go as well. Here is the code I have for looping through and getting the names of all the webparts. Is there any property I can access on the list items of webparts that would tell me whether it's a custom-developed webpart? Any way to distinguish the custom webparts from the out-of-the-box ones? Is there another way to do this?

        #region Misc Site Collection Methods
        public static List<string> GetAllWebParts(string connectedSPInstanceUrl)
        {
            List<string> lstWebParts = new List<string>();
            try
            {
                using (SPSite site = new SPSite(connectedSPInstanceUrl))
                using (SPWeb web = site.OpenWeb())
                {
                    SPList list = web.Lists["Web Part Gallery"];
                    foreach (SPListItem item in list.Items)
                    {
                        lstWebParts.Add(item.Name);
                    }
                }
            }
            catch (Exception ex)
            {
                lstWebParts.Add("Error");
                lstWebParts.Add("Message: " + ex.Message);
                // guard against a null InnerException before calling ToString()
                lstWebParts.Add("Inner Exception: " + (ex.InnerException == null ? "none" : ex.InnerException.ToString()));
                lstWebParts.Add("Stack Trace: " + ex.StackTrace);
            }
            return lstWebParts;
        }
        #endregion
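
    One approach that is sometimes suggested - offered here only as a hedged sketch, not a definitive test - is to open each gallery item's definition file (.dwp or .webpart) and look at the assembly it references; parts pointing at Microsoft.SharePoint or Microsoft.Office assemblies are typically out-of-the-box, while custom parts reference the developers' own assemblies. The naive string check below is an assumption for illustration; a real version would parse the XML properly.

        // Sketch: classify a Web Part Gallery item by the assembly its definition file references.
        // Assumes the .dwp/.webpart XML mentions the assembly name (not guaranteed for every part).
        private static bool LooksLikeOutOfTheBox(SPListItem item)
        {
            byte[] bytes = item.File.OpenBinary();
            string xml = System.Text.Encoding.UTF8.GetString(bytes);
            return xml.Contains("Microsoft.SharePoint") || xml.Contains("Microsoft.Office");
        }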

    Read the article

  • "Slave" user accounts in GNU/Linux

    - by Vi
    How can I make one user account behave like root towards some other user account, e.g. be able to read, write and chmod all of its files, chown files from the slave account to the master and back, kill/ptrace all of its processes, and do everything else root can - but limited only to that particular slave account? Right now I'm simulating this by allowing the "master" user to "sudo -u slaveuser" and setting setfacl -dRm u:masteruser:rwx ~slaveuser. It is useful because I run most desktop programs in separate user accounts, but sometimes need to move files between them. If it requires some simple kernel patch, that is OK.

    Read the article

  • SQL Server backup and restore process

    - by Nai
    Just wondering what backup processes you guys have. I am currently running a weekly full database backup with daily differential backups. My understanding is that, with such a setup, the difference between the Full recovery model and the Simple recovery model is that under Full recovery I can use the transaction logs to roll my DB back to a specific point in time after applying the latest differential backup. Assuming that in my scenario the last differential backup serves as my last and ultimate 'save point', I don't see a need to roll my DB back any further using the logs. This brings me to my question: are there any additional benefits to using the Full recovery model with my current backup process?
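
    As a point of reference, here is a hedged T-SQL sketch of the weekly-full / daily-differential cycle, plus the log backups that only the Full recovery model makes useful (the database name and paths are placeholders):

        -- weekly full backup
        BACKUP DATABASE MyDb TO DISK = 'D:\Backups\MyDb_full.bak' WITH INIT;

        -- daily differential backup
        BACKUP DATABASE MyDb TO DISK = 'D:\Backups\MyDb_diff.bak' WITH DIFFERENTIAL;

        -- only meaningful under the Full recovery model: frequent log backups,
        -- which are what make point-in-time restores between differentials possible
        BACKUP LOG MyDb TO DISK = 'D:\Backups\MyDb_log.trn';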

    Read the article

  • Should I split my website into different servers

    - by Nyxynyx
    I have a website where a user uploads photos; the photos get resized and thumbnailed, and stored on the server. At the same time, there are some INSERTs into a MySQL table for each photo uploaded (description, user id, etc.). The site currently runs off a managed VPS, and I love the support it provides. However, it is expensive to store the many small photos, and the resizing and thumbnailing processes do cause spikes in the app's performance. (Amazon S3 is pretty expensive, especially considering the cost of uploading many small files.) Question: Would it be a good idea to move the image processing and image storage to another server - an unmanaged dedicated server with a much lower cost per GB - and keep the current VPS for its 24/7 support and for hosting the webapp? Or should I move the entire site to the dedicated server?

    VPS specs: 16 cores @ 2.4GHz (E5620), 1GB memory, 60GB storage, 3.5TB transfer, $43/mth, managed (24/7 support)
    Dedicated specs: i3 2130, 2 cores @ 3.4+ GHz, 16GB DDR3, 2 x 1TB SATA2 storage, 15TB transfer, $79/mth, unmanaged (weekday support only)
    Software used: Apache, PHP, MySQL, Solr, PostgreSQL, ImageMagick

    Read the article

  • 64-bit Alternative for Microsoft Jet

    - by David Robison
    Microsoft has chosen to not release a 64-bit version of Jet, their database driver for Access. Does anyone know of a good alternative? Here are the specific features that Jet supports that I need:

    Multiple users can connect to the database over a network.
    Users can use Windows Explorer to copy the database while it is open without risking corruption. Access currently does this with enough reliability for what my customers need.
    Works well in C++ without requiring .NET.

    Alternatives I've considered that I do not think could work (though my understanding could be incorrect):

    SQLite: if multiple users connect to the database over a network, it will become corrupted.
    Firebird: copying a database that is in use can corrupt the original database.
    SQL Server: files in use are locked and cannot be copied.
    VistaDB: this appears to be .NET specific.
    Compile in 32-bit and use WOW64: another dependency requires us to compile in 64-bit, even though we don't use any 64-bit functionality.

    Read the article

  • Remote kill, upload, execute file

    - by Masoud M.
    I'm developing a program and I need to upload my xyz.exe file to many host machines and execute it frequently. I need a client-server tool that performs the following steps on each host after an update signal from my PC: 1) the host machine kills any running process named xyz.exe; 2) it downloads my new xyz.exe; 3) it executes the new xyz.exe. I know about tools like PsExec, but I need a tool that is more powerful and has a better user interface. Is there any tool to do this? UPDATE: The systems are on the same LAN, the OS is Windows (XP or 7), and no full remote access is needed. I'm a developer: my program runs on the remote hosts and I'm testing my application.

    Read the article

  • Monitoring just what's going on on a firewall

    - by bbutle01
    I have this little SnapGear firewall. It's a little purpose-built box running a custom Linux: SH4 processor @ 240 MHz, 64MB of RAM. Basically, how close we are to capacity is a mystery to me. I know I can run top and see the status of all the processes, but how can I see how much of the processor is going to passing data, and how can I estimate when I'm going to need to upgrade? And when I tweak iptables rules, how does that help or hurt the processor? Suggestions?

    Read the article

  • How to automatically resume php-fpm?

    - by alfish
    I am using nginx + php-fpm on Debian Squeeze for a busy server and have had great difficulty dealing with the maximum number of connections being reached. The problem is that the php processes sometimes just die randomly under high load and leave the server with no php processes at all. Then I need to manually restart the php5-fpm service to bring the server back to life. I am wondering how to prevent this from happening, or at least how to treat the symptoms by restarting php5-fpm automatically whenever there are no php processes left to listen for incoming requests. My relevant configs are:

        pm = dynamic
        pm.max_children = 1400
        pm.start_servers = 10
        pm.max_spare_servers = 20
        pm.process_idle_timeout = 1s; #not sure it will be useful when pm=dynamic
        pm.max_requests = 100000
        request_terminate_timeout = 30

    I appreciate your suggestions for coping with this nasty problem.
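
    One knob worth knowing about, offered here only as a hedged sketch, is php-fpm's built-in emergency restart mechanism in the global php-fpm.conf (not the pool config): when more than a threshold number of children die within a given interval, the master restarts itself. The values below are examples, not recommendations.

        ; php-fpm.conf, global section
        emergency_restart_threshold = 10
        emergency_restart_interval = 1m
        process_control_timeout = 10s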

    Read the article

  • Laptop turns off after 20 minutes of use

    - by Christoph
    My laptop is a Sony VAIO VGN-NW11S (http://www.trustedreviews.com/Sony-VAIO-VGN-NW11S-S---15-5in-Laptop_Laptop_review). Every time I turn it on, in safe mode or not, if I try to open an application - i.e. run a process such as Google Chrome, Event Viewer, defrag or a virus scan - it completely turns off without warning and without leaving any trace of events the next time I switch it on. Apart from that, I worried it might be the battery or power supply, but I don't think it is: I took the laptop apart, cleaned the fans etc., and have ordered some CPU paste, as I checked the condition of the processor. I will post to say whether re-applying the paste works. One more thing: when heavy processes kick in, the fan starts to make a lot of noise, maybe trying to cool down the CPU? Any ideas on what else it could be and what I could do to test what is wrong?

    Read the article

  • Website: Requested filename being rewritten

    - by horatio
    I have been unable to find an answer via search. I have a website (I do not administer the servers) where the server will serve a different file than the one requested. I first noticed this when using a filename of the following form: _foo.php (single underscore) If I request foo.php (does not exist), the server returns _foo.php. By "returns" I mean that the server decides I meant _foo.php, processes the php file, and serves the output. If I request afoo.php, zfoo.php, or even __foo.php (two underscores) (these files do not exist) the server returns _foo.php. If I request aafoo.php, the server returns 404. To sum up: the server seems to be doing a partial filename match. My question is: what is happening and is this accepted behavior for a web server (or standard behavior of a common mod/package/etc)?

    Read the article

  • How to choose a web server for a Python application?

    - by Phil
    Information and prerequisites: I have a project which is, at its core, a basic CRUD application. It doesn't have long-running background processes which it forks at the beginning and talks to later on, nor does it have long-running queries or keep-alive connection requirements. It receives a request, makes some queries to the database and then responds. In order to serve static files and cacheable files fast, I am going to use Varnish in all cases. Here is my question: after reading about various Python web application servers, I have seen that they all have their "fans" for certain, usually "personal" reasons, which got me confused since each use case differs from the next. How can I learn about the core differentiating factors of Python web servers in order to decide how suitable they are for my project, and whether one would be better than the others? What are your (technically provable) thoughts on the matter? How should I choose a Python web server? Thank you.

    Read the article

  • What are the benefits of running an app server in user space, like Unicorn, as opposed to as root?

    - by dan
    I've been using Phusion Passenger + Rails/Sinatra for a lot of projects. Passenger runs under the main Nginx or Apache process. But I'm interested in Unicorn, partly because it runs in user space: you just set up Nginx to proxy_pass requests to a unix socket that is connected to Unicorn processes you fire up under a normal user account. Is there anything to be said about the advantages and disadvantages of these two approaches to running a web app? I mean in terms of ease of administration, stability, simplicity, etc.
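
    For reference, the wiring described above usually looks something like the following nginx snippet; this is only a sketch, and the upstream name, socket path and server_name are placeholders.

        upstream unicorn_app {
            server unix:/tmp/unicorn.myapp.sock fail_timeout=0;
        }

        server {
            listen 80;
            server_name example.com;

            location / {
                proxy_pass http://unicorn_app;
            }
        }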

    Read the article

  • How to extract block of XML from a log file on Linux

    - by dragonmantank
    I have a log file that looks like the following:

        2010-05-12 12:23:45 Some sort of log entry
        2010-05-12 01:45:12 Request XML:
        <RootTag>
            <Element>Value</Element>
            <Element>Another Value</Element>
        </RootTag>
        2010-05-12 01:45:32 Response XML:
        <ResponseRoot>
            <Element>Value</Element>
        </ResponseRoot>
        2010-05-12 01:45:49 Another log entry

    What I want to do is extract the Request and Response XML (and ultimately dump them into their own single files). I had a similar parser that used egrep but the XML was all on one line, not multiple ones like above. The log files are also somewhat large, hitting 500-600 megs a log. Smaller logs I would read in via a PHP script and use regex matching, but the amount of memory required for such a large file would more than likely kill the script. Is there an easy way using the built-in tools on a Linux box (CentOS in this case) to extract multiple lines, or am I going to have to bite the bullet and use Perl or PHP to read in the entire file to extract it?
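
    For what it's worth, a range address in sed (one of the stock tools on CentOS) can pull out just the blocks between the marker lines without loading the whole file into memory. This is only a sketch and assumes each block really does start with a "Request XML:"/"Response XML:" line and end with its closing root tag:

        # print everything from each "Request XML:" line through the closing </RootTag>
        sed -n '/Request XML:/,/<\/RootTag>/p' app.log > requests.xml

        # same idea for the responses
        sed -n '/Response XML:/,/<\/ResponseRoot>/p' app.log > responses.xml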

    Read the article

  • Apache - mod_pagespeed freezes my website

    - by Jonathan Rioux
    I have installed the mod_pagespeed module for Apache. I am using Debian, so I downloaded the .deb file and installed it successfully. I then configured some filters, and it worked like a charm for a few minutes. Then, after something like 10 minutes, my website no longer responded to requests. When I requested my website, the browser just said "Waiting for www.blablabla.com" and I never got the page back from the server. I checked the processes running on my Debian box with top -d 0.5, and nothing was eating up the CPU. To make my website respond to requests again, I must do a /etc/init.d/apache2 restart. It then works again, with mod_pagespeed applying its filters, for a couple of minutes, and then stops responding again. How can I diagnose this issue? Are there some other settings in the mod_pagespeed.conf file that I must set?

    Read the article

  • Apache on Mac Mavericks issue

    - by Michael
    Trying to run Apache so that I can create a testing server on my Mac. When I start Apache it starts, but it doesn't run (no connection to localhost). I'll paste the shell output below; you'll see that after starting there are no httpd processes. I also did a check to show you what was running on my port 80, though I don't entirely know what it means.

        Michaels-MacBook-Pro-3:~ michaelramos$ sudo apachectl start
        Michaels-MacBook-Pro-3:~ michaelramos$ ps aux | grep httpd
        michaelramos  348  0.0  0.0  2442000  624 s000  S+  8:51AM  0:00.00 grep httpd
        Michaels-MacBook-Pro-3:~ michaelramos$ sudo apachectl start
        org.apache.httpd: Already loaded
        Michaels-MacBook-Pro-3:~ michaelramos$ sudo lsof -i ':80'
        COMMAND PID USER  FD   TYPE             DEVICE SIZE/OFF NODE NAME
        ocspd    96 root  18u  IPv4 0x8402f926599c58df      0t0  TCP dhcp-92-67.radford.edu:49267->108.162.232.196:http (ESTABLISHED)
        ocspd    96 root  20u  IPv4 0x8402f926599c58df      0t0  TCP dhcp-92-67.radford.edu:49267->108.162.232.196:http (ESTABLISHED)
        ocspd    96 root  21u  IPv4 0x8402f926599c50f7      0t0  TCP dhcp-92-67.radford.edu:49268->108.162.232.206:http (ESTABLISHED)
        ocspd    96 root  23u  IPv4 0x8402f926599c50f7      0t0  TCP dhcp-92-67.radford.edu:49268->108.162.232.206:http (ESTABLISHED)

    Read the article

  • MySQL won't stop doing stuff

    - by Felix
    Sorry for the title of the question; here's my problem. I've been trying to set up some scripts that import a lot of stuff hourly from an external source. They seemed to work fine, so I set up a cronjob to run them every hour. One day later I find six or seven instances of that script just hogging the MySQL server, making it unresponsive. I killed their processes, but MySQL was still not responding. I had to kill MySQL, reboot, and then MySQL started working again (on who knows what) while still being unresponsive (yes, I did remove the scripts from the cronjobs). I ran SHOW PROCESSLIST and killed every process I could find. Still nothing: MySQL is hogging the HDD, sits at the top of top, and is making the server load skyrocket. I don't know what to do; if I kill it and start it again it will probably do the same thing. What should I do?

    Read the article

  • MySql Query lag time?

    - by Click Upvote
    When there are multiple PHP scripts running in parallel, each making an UPDATE query to the same record in the same table repeatedly, is it possible for there to be a 'lag time' before the table is updated by each query? I have basically 5-6 instances of a PHP script running in parallel, launched via cron. Each script gets all the records in the items table and then loops through them and processes them. However, to avoid processing the same item more than once, I store the id of the last item being processed in a separate table. So this is how my code works:

        function getCurrentItem()
        {
            $sql = "SELECT currentItemId from settings";
            $result = $this->db->query($sql);
            return $result->get('currentItemId');
        }

        function setCurrentItem($id)
        {
            $sql = "UPDATE settings SET currentItemId='$id'";
            $this->db->query($sql);
        }

        $currentItem = $this->getCurrentItem();

        $sql = "SELECT * FROM items WHERE status='pending' AND id > '$currentItem'";
        $result = $this->db->query($sql);
        $items = $result->getAll();

        foreach ($items as $i) {
            // Check if $i has been processed by a different instance of the script,
            // and if so, leave it untouched.
            if ($this->getCurrentItem() > $i->id) continue;

            $this->setCurrentItem($i->id);

            // Process the item here
        }

    But despite all the precautions, most items are being processed more than once. Which makes me think that there is some lag time between the update queries being run by the PHP script and when the database actually updates the record. Is that true? And if so, what other mechanism should I use to ensure that the PHP scripts always get the latest currentItemId even when there are multiple scripts running in parallel? Would using a text file instead of the db help?
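
    The usual way to close the gap between the SELECT and the UPDATE above is to make the claim itself a single atomic statement in the database, rather than a read-then-write from PHP. A rough sketch follows; the worker_id column is hypothetical, added only for illustration.

        -- each instance claims exactly one pending row atomically,
        -- so two parallel workers can never grab the same item
        UPDATE items
           SET status = 'processing', worker_id = 42
         WHERE status = 'pending'
         ORDER BY id
         LIMIT 1;

        -- then fetch whatever row this worker just claimed
        SELECT * FROM items WHERE status = 'processing' AND worker_id = 42;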

    Read the article

  • Best practices, PHP, tracking millions of impressions per day.

    - by John
    What do I have to do to make 20k MySQL inserts per second possible (that's during peak hours; it's around 1k/sec during slower times)? I've been doing some research and I've seen the "INSERT DELAYED" suggestion, writing to a flat file with "fopen(file,'a')" and then running a cron job to dump the "needed" data into MySQL, and so on. I've also heard you need multiple servers and "load balancers", which I've never worked with, to make something like this work. I've also been looking at these "cloud server" thing-a-ma-jigs and their automatic scalability, but I'm not sure what's actually scalable. The application is just a tracker script, so if I have 100 websites that get 3 million page loads a day, there will be around 300 million inserts a day. The data will be run through a script every 15-30 minutes which will normalize it and insert it into another MySQL table. How do the big dogs do it? How do the little dogs do it? I can't afford a huge server anymore, so if you smart people can think of any intuitive ways of going about it (and there may be multiple ways), please let me know :)
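
    To give a feel for the flat-file-plus-cron idea mentioned above, here is a hedged sketch (the table and column names are made up): the tracker appends one line per hit, and the periodic job bulk-loads the file instead of issuing one INSERT per impression.

        -- batching many rows into one INSERT already cuts per-statement overhead considerably
        INSERT INTO raw_hits (site_id, url, hit_time) VALUES
          (1, '/index.html', NOW()),
          (1, '/about.html', NOW()),
          (2, '/index.html', NOW());

        -- or bulk-load the flat file that the cron job rotated out
        LOAD DATA INFILE '/var/spool/tracker/hits-20100512.csv'
          INTO TABLE raw_hits
          FIELDS TERMINATED BY ','
          (site_id, url, hit_time);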

    Read the article

  • Unable to logoff, disconnect, or reset terminal server user in production environment

    - by l0c0b0x
    I'm looking for some ideas on how to disconnect, log off, or reset a user's session on a 2008 Terminal Server (we are unable to log in as the user either, as the session is completely locked up). This is a production environment, so rebooting the server or doing something system-wide is out of the question for now. Any PowerShell tricks to help us with this? We've tried to disconnect, log the user off and reset the session, as well as killing the session's processes, directly on the terminal server (from Task Manager, Terminal Services Manager and Resource Monitor), with no results. Help! UPDATE: We ended up rebooting the server, as no other attempts that we could think of worked. I'll leave this question open hoping someone might have more information about this issue and its potential fixes.

    Read the article

  • Creating a blocking Queue<T> in .NET?

    - by spoon16
    I have a scenario where I have multiple threads adding to a queue and multiple threads reading from the same queue. If the queue reaches a specific size, all threads that are filling the queue will be blocked on add until an item is removed from the queue. The solution below is what I am using right now and my question is: How can this be improved? Is there an object that already enables this behavior in the BCL that I should be using?

        internal class BlockingCollection<T> : CollectionBase, IEnumerable
        {
            //todo: might be worth changing this into a proper QUEUE

            private AutoResetEvent _FullEvent = new AutoResetEvent(false);

            internal T this[int i]
            {
                get { return (T) List[i]; }
            }

            private int _MaxSize;
            internal int MaxSize
            {
                get { return _MaxSize; }
                set
                {
                    _MaxSize = value;
                    checkSize();
                }
            }

            internal BlockingCollection(int maxSize)
            {
                MaxSize = maxSize;
            }

            internal void Add(T item)
            {
                Trace.WriteLine(string.Format("BlockingCollection add waiting: {0}", Thread.CurrentThread.ManagedThreadId));
                _FullEvent.WaitOne();
                List.Add(item);
                Trace.WriteLine(string.Format("BlockingCollection item added: {0}", Thread.CurrentThread.ManagedThreadId));
                checkSize();
            }

            internal void Remove(T item)
            {
                lock (List)
                {
                    List.Remove(item);
                }
                Trace.WriteLine(string.Format("BlockingCollection item removed: {0}", Thread.CurrentThread.ManagedThreadId));
            }

            protected override void OnRemoveComplete(int index, object value)
            {
                checkSize();
                base.OnRemoveComplete(index, value);
            }

            internal new IEnumerator GetEnumerator()
            {
                return List.GetEnumerator();
            }

            private void checkSize()
            {
                if (Count < MaxSize)
                {
                    Trace.WriteLine(string.Format("BlockingCollection FullEvent set: {0}", Thread.CurrentThread.ManagedThreadId));
                    _FullEvent.Set();
                }
                else
                {
                    Trace.WriteLine(string.Format("BlockingCollection FullEvent reset: {0}", Thread.CurrentThread.ManagedThreadId));
                    _FullEvent.Reset();
                }
            }
        }
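
    On the BCL question: since .NET 4, System.Collections.Concurrent.BlockingCollection<T> provides exactly this behavior out of the box, with a bounded capacity that blocks producers when the collection is full and consumers when it is empty. A minimal usage sketch:

        using System.Collections.Concurrent;

        // bounded to 100 items: Add blocks while the collection is full, Take blocks while it is empty
        var queue = new BlockingCollection<int>(boundedCapacity: 100);

        queue.Add(42);            // producer side
        int item = queue.Take();  // consumer side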

    Read the article

  • How do MySQL 5.5 and InnoDB on Linux use RAM?

    - by Loren
    Does MySQL 5.5 InnoDB keep indexes in memory and tables on disk? Does it ever do its own in-memory caching of part or all of a table? Or does it rely completely on the OS page cache (I'm guessing it does, since Facebook's SSD cache that was built for MySQL was done at the OS level: https://github.com/facebook/flashcache/)? Does Linux by default use all of the available RAM for the page cache? So if RAM size exceeds table size plus the memory used by processes, then when the MySQL server starts and reads the whole table for the first time, it will read from disk, and from that point on the whole table is in RAM? So using Alchemy Database (SQL on top of Redis, everything always in RAM: http://code.google.com/p/alchemydatabase/) shouldn't be much faster than MySQL, given the same size of RAM and database?
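
    For context, InnoDB does do its own in-memory caching: it keeps both data pages and index pages in its buffer pool, which is sized explicitly rather than being left to the OS page cache. A hedged my.cnf sketch (the value is just an example, not a recommendation):

        # size the InnoDB buffer pool to hold the hot data and index pages
        [mysqld]
        innodb_buffer_pool_size = 12G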

    Read the article

  • Real time embeddable http server library required

    - by Howard May
    Having looked at several available HTTP server libraries, I have not yet found what I am looking for, and I am sure I can't be the first to have this set of requirements. I need a library which presents an API that is 'pipelined'. Pipelining is used to describe an HTTP feature where multiple HTTP requests can be sent across a TCP link at a time without waiting for a response. I want a similar feature in the library API, where my application can receive all of those requests without having to send a response (I will respond, but I want the ability to process multiple requests at a time to reduce the impact of internal latency). So the web server library will need to support the following flow:

    1) HTTP Client transmits http request 1
    2) HTTP Client transmits http request 2 ...
    3) Web Server Library receives request 1 and passes it to My Web Server App
    4) My Web Server App receives request 1 and dispatches it to My System
    5) Web Server receives request 2 and passes it to My Web Server App
    6) My Web Server App receives request 2 and dispatches it to My System
    7) My Web Server App receives response to request 1 from My System and passes it to Web Server
    8) Web Server transmits HTTP response 1 to HTTP Client
    9) My Web Server App receives response to request 2 from My System and passes it to Web Server
    10) Web Server transmits HTTP response 2 to HTTP Client

    Hopefully this illustrates my requirement. There are two key points to recognise: responses to the Web Server Library are asynchronous, and there may be several HTTP requests passed to My Web Server App with responses outstanding. Additional requirements are:

    Embeddable into an existing 'C' application
    Small footprint; I don't need all the functionality available in Apache etc.
    Efficient; will need to support thousands of requests a second
    Allows asynchronous responses to requests; there is a small latency to responses and, given the required request throughput, a synchronous architecture is not going to work for me
    Support for persistent TCP connections
    Support for use with Server-Push Comet connections
    Open Source / GPL
    Support for HTTPS
    Portable across Linux and Windows; preferably more

    I will be very grateful for any recommendation. Best Regards

    Read the article

  • Why is hibernation still used?

    - by Moses
    I've never quite understood the original purpose of the Hibernation power state in Windows. I understand how it works, what processes take place, and what happens when you boot back up from Hibernate, but I've never truly understood why it's used. With today's technology, most notably with SSDs, RAM and CPUs becoming faster and faster, a cold boot on a clean/efficient Windows installation can be pretty fast (for some people, mere seconds from pushing the power button). Standby is even faster, sometimes instantaneous. Even SATA drives from 5-6 years ago can accomplish these fast boot times. Hibernation seems pointless to me when modern technology is considered, but perhaps there are applications that I'm not considering. What was the original purpose behind hibernation, and why do people still use it? Edit: I rescind my comment about hibernation being obsolete, as it obviously has very practical applications to laptops and mobile PCs, considering the power restrictions. I was mostly referring to hibernation being used on a desktop.

    Read the article

  • Drupal-- How to place an image in Panels 3 panels and mini-panels, w/o views or nodes?

    - by msumme
    Is it possible, through any modular functionality, to insert an image into a (mini-)panel, either through token replacement, an upload dialog, or a file selection menu? Do I have to use Views? Do I have to create nodes? Would the best way be to make a panel node and then embed it in a mini-panel, if I want a block-like panel that can be placed on multiple pages? I want to build a site with images in a particular layout as a small block, and make it very easy for my client to change those images in the future. I can think of some other ways to make this work, but it's driving me crazy that there seems to be no way to simply PUT an image in a mini-panel without having to upload it and hard-code an image tag. And since my client knows no HTML, coding it this way makes it unhelpful for him. This mini-panel block is going to be used on a number of pages and needs to be easily modified. I have been googling for about 45 minutes and have come up with nothing useful. EDIT: Or even just put only one image from an image field with multiple values in a panel region on a panel node?

    Read the article

  • Is there an ORM that supports composition w/o Joins

    - by Ken Downs
    EDIT: Changed title from "inheritance" to "composition". Left body of question unchanged. I'm curious whether there is an ORM tool that supports inheritance without creating separate tables that have to be joined. Simple example: assume a table of customers, with a bill-to address, and a table of vendors, with a remit-to address. Keep it simple and assume one address each, not a child table of addresses for each. These addresses will have a handful of values in common: address 1, address 2, city, state/province, postal code. So let's say I'd have a class "addressBlock" and I want the customers and vendors to inherit from this class, and possibly from other classes. But I do not want separate tables that have to be joined; I want the columns in the customer and vendor tables respectively. Is there an ORM that supports this? The closest question I have found on StackOverflow that might be the same question is linked below, but I can't quite figure out whether the OP is asking what I am asking. He seems to be asking about foregoing inheritance precisely because there will be multiple tables. I'm looking for the case where you can use inheritance without generating the multiple tables. Model inheritance approach with Django's ORM
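
    For illustration only, this is roughly what the addressBlock idea looks like with JPA/Hibernate-style annotations, where an embeddable value type's columns are stored directly in the owning entity's table with no join (class and field names here just mirror the example above):

        // value object whose columns live in whichever table embeds it
        @Embeddable
        public class AddressBlock {
            private String address1;
            private String address2;
            private String city;
            private String stateProvince;
            private String postalCode;
        }

        @Entity
        public class Customer {
            @Id private Long id;

            // the bill-to address columns end up in the customer table itself
            @Embedded private AddressBlock billTo;
        }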

    Read the article
