Search Results

Search found 22900 results on 916 pages for 'pascal case'.


  • Graphing/charting of CPU utilisation

    - by Peter
    So Nagios can be good at graphing particular resource utilisation or other metrics, but I'm looking for a tool that can output a chart or other graphical representation of how much CPU time/CPU utilisation /all/ services on a server are currently consuming. I think New Relic could probably achieve this to an extent, but I was wondering if there is a popular open source app used for this. In case I am explaining this badly, my actual problem is that I have a shared server with suexec enabled (i.e. httpd CGI running under multiple user accounts). I'd like to know which users are using the most CPU during periods of the day.
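
    A quick way to get a point-in-time view of per-user CPU consumption is to sum the %CPU column from ps by user; a minimal sketch (it samples only the moment it runs, so pair it with cron or a grapher such as Munin or collectd for trends over the day):

        # Sum instantaneous %CPU per user, highest first
        ps -eo user,pcpu --no-headers | awk '{cpu[$1] += $2} END {for (u in cpu) printf "%-12s %6.1f\n", u, cpu[u]}' | sort -k2 -rn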

    Read the article

  • Welcome to the Java Training Beat!

    - by tmcginn
    We are a group of dedicated training developers for Java, located in the US, India, and now Mexico. In this blog we will announce new training content and events that might be of interest to our readers. In this first installment of the Java Training Beat, I would like to introduce three new Oracle By Example (OBE) modules I recently released and posted to the Oracle Online Learning Library. Creating a Simple Java Message Service (JMS) Producer with NetBeans and GlassFish - covers how to create a simple text message producer with NetBeans 7 and GlassFish. Creating Java Message Service (JMS) Resources in WebLogic Server 12c - covers how to create JMS resources using the console and WebLogic Server 12c. With this tutorial, you can replicate the results of the first tutorial in WebLogic. Creating a Publish/Subscribe Model with Message-Driven Beans and GlassFish Server - covers how to create a publish/subscribe application using JMS. This tutorial includes a short case study that includes a JSF front-end application that sends a hotel reservation request object to the server as a MapMessage. Hope you find these useful!  And do check out the Online Learning Library - we have a wide range of additional content posted and more being added every month!

    Read the article

  • 'Hijacking' a GUI and mapping certain controls to certain functions (VNC, TouchOSC)

    - by Nick
    I need a VNC-like application which, rather than sharing the screen, can take control of a specific application and then share that functionality across a network to multiple clients. Obviously, VNC requires use of the mouse, and therefore only one user can do something at one time; this is NOT what I am after. I am after something that can hijack the graphical user interface, map certain controls, and then display them in another piece of software (perhaps like TouchOSC). The software I would like to map and share is called Yamaha Studio Manager and is used to control certain Yamaha audio hardware, in my case a Yamaha LS9 and M7CL mixing console. It's free.

    Read the article

  • EmblaCom Oy Maximizes Database Availability and Reduces Costs with MySQL Cluster

    - by Bertrand Matthelié
    Headquartered in Finland, EmblaCom Oy provides turnkey and cloud-hosted voice solutions to mobile operators around the globe. Since launching the original mobile private branch exchange (PBX) in 1998, the company has focused on helping its partners provide efficient voice communications to their key business customers. The company's voice solutions are used by millions of subscribers worldwide. EmblaCom Oy needed to replace several database engines with a standardized, scalable, development-friendly database solution to maximize availability and cut costs. The company chose MySQL Cluster Carrier Grade Edition, which has maximized accessibility to EmblaCom's services for its clients and their hundreds of thousands of subscribers. The initiative has also halved the cost of the database solution installation for customers, as well as lowered maintenance and customer service costs. Read the entire case study here.

    Read the article

  • Google Music Player doesn't work

    - by EricoPF
    I'm trying to log in to the Google Music Player application but it doesn't work. I get the message below:

        Login Failed
        Could not identify your computer. Learn More

    Google Help says it doesn't run on virtual machines, which is not my case, though I do have VirtualBox installed, and it says some people got it to work by disabling their bridge network. The thing is, I don't have any bridge interface; even if I remove all VirtualBox modules I still get this message. This is my ifconfig output:

        lo    Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:16436  Metric:1
              RX packets:15374 errors:0 dropped:0 overruns:0 frame:0
              TX packets:15374 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:1455889 (1.4 MB)  TX bytes:1455889 (1.4 MB)

        wlan0 Link encap:Ethernet  HWaddr 94:db:c9:b2:1b:d7
              inet addr:192.168.1.100  Bcast:192.168.1.255  Mask:255.255.255.0
              inet6 addr: fe80::96db:c9ff:feb2:1bd7/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:828467 errors:0 dropped:0 overruns:0 frame:0
              TX packets:568040 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:1086663025 (1.0 GB)  TX bytes:72984931 (72.9 MB)

    Any ideas? Thanks guys! Cheers.

    Read the article

  • How do uploads to the web work on local networks

    - by Saif Bechan
    Let's say I have two computers hooked up as a home network. They both use the same router, and the router is hooked up to the net. Now let's say I am working on computer A, and I can access files on computer B. Computer B has a drive that is mounted on computer A as a network drive. Now I want to upload a file to a website. On computer A I open a browser and go to the website. On the website I select 'upload file'; in the file browser I go to the network drive and select a file on computer B to upload. What happens in this case? Is the file uploaded directly from computer B to the website, or is the file first transferred to computer A, and then to the website?

    Read the article

  • What is a "good" tool to password-protect .pdf files?

    - by Marius Hofert
    What is a "good" tool to encrypt (password-protect) .pdf files? (Without buying additional software; the protection can be created under Linux, but the password prompt should work on Windows, too.) I know that zip can do it: zip zipfile_name_without_ending -e files_to_encrypt.foo. What I don't like about this is that for a single file, you have to use WinZip to open the zip and then click the file again. I would rather be prompted for a password when opening the .pdf (single-file case). I know that pdftk can do this: pdftk foo.pdf output foo_protected.pdf user_pw mypassword. The problem here is that the password is displayed in the terminal -- even if you use ... user_pw PROMPT. But in the end you get a password-protected .pdf and you are prompted for the password when opening the file.
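
    One way to keep the password out of the terminal and your shell history is to read it silently into a variable first; a minimal sketch (note the password is still briefly visible in the process list while pdftk runs):

        read -s -p "Password: " pw && echo
        pdftk foo.pdf output foo_protected.pdf user_pw "$pw"
        unset pw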

    Read the article

  • rsnapshot stats

    - by Obscur Moirage
    I'd like to retrieve the following stats from rsnapshot: files synced, added files, modified files, deleted files. Is there a feature to retrieve these in rsnapshot, or is there another product that's able to do it? EDIT: As requested, I'll try to show that I'm not just asking what I want to do without any research. I wasn't able to locate any rsnapshot feature doing this; maybe I'm searching in the wrong direction. So I've built a not very pretty script, called each time before rsnapshot is run. This Perl script stores each file's MD5, in order to compare backup file structures between rsnapshot updates. I'm pretty sure it's worthless to show this code here. I think that keeping an eye on what changes on a server, for example, is a useful feature. So I'm asking. @pauska Most of the time I try to find an answer myself, which is not the case here. Thanks
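
    Since rsnapshot snapshots are plain directory trees, one approximation is a dry-run rsync between the two newest snapshots, counting the itemized changes; a sketch assuming the default daily.0/daily.1 layout under the snapshot root:

        # Nothing is modified: -n makes it a dry run; count added/changed/deleted files
        rsync -rin --delete daily.0/ daily.1/ | awk '
            /^>f\+\+\+/           { added++ }
            /^>f/ && !/\+\+\+/    { modified++ }
            /^\*deleting/         { deleted++ }
            END { printf "added=%d modified=%d deleted=%d\n", added, modified, deleted }'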

    Read the article

  • How can I test linkable/executable files that require re-hosting or retargeting?

    - by hagubear
    Due to data protection, I cannot discuss fine details of the work itself, so apologies. PROBLEM CASE: Sometimes my software projects require merging/integration with third-party (customer or other suppliers') software. This software often arrives as linkable executables or object code (which requires that my source code is retargeted and linked with it). When I get the executables or object code, I cannot validate their operation fully without integrating them with my system. My initial idea is that executables are not meant to be unit tested, they are meant to be linked with other systems, but what is the guarantee that post-linkage and integration behaviour will be okay? There is also no sufficient documentation available (from the customer) to indicate how to go about integrating the executables or object files. I know this is a philosophical question, but apparently not enough research could be found at this moment to conclude a solution. I was hoping that people could point me in the right direction by suggesting approaches. To start, I have found out that avionics OEM software is often rehosted and retargeted by third parties, e.g. simulator makers. I wonder how they test them. Surely the source code will not be supplied due to IPR regulations. UPDATE: I have received reasonable and very useful suggestions regarding this area. My current struggle has shifted to testing third-party OBJECT code that needs to be linked with my own source code (retargeted) on my host machine. How can I even test object code? Surely I need to link it first to even think about doing anything. Is it the post-link behaviour that needs to be determined and scripted (using Perl, Tcl, etc.) so that inputs and outputs can be verified? No clue!! :( thanks,

    Read the article

  • Motivation for a service layer (instead of just copying DLLs)?

    - by BornToCode
    I'm creating an application which has 2 different UIs, so I'm making it with a service layer, which I understood is appropriate for such a case. However, I found myself just creating web methods for every single method I have in the BL layer, so the services are basically built from methods that look like this: return customers_bl.Get_Customer_Prices(customer_id); I understood that a main point of the service layer is to prevent duplication of code, so I asked myself - well, why not just import the BL.dll (and the DAL.dll) into the other UI, and whenever making a change re-copy the dll files? It might not be so 'neat', but is the whole purpose of the service layer to prevent this? {I know something is wrong in my approach, I'm probably missing the importance of the service layer. I'd like more motivation to create another layer, especially because, as it is, many of my BL functions ALREADY look like: return customers_dal.Get_Customer_Prices(cust_id), which led me to ask: was it really necessary to create the BL just because several functions actually have LOGIC inside the BL?} So I'm looking for more motivation for creating ONE MORE layer. I'm sure it's not just for the convenience of not having to re-copy the dlls on changes? Am I grasping it wrong? Any simple guidelines on how to design a service layer (corresponding to all the BL layer functions or not? any simple example?) Any enlightenment on the subject?

    Read the article

  • Which version management design methodology should be used for dependent system nodes?

    - by actiononmail
    This is my first question, so please indicate if it is too vague or hard to understand. My question is related to high-level design. We have a system (specifically an ATCA chassis) configured in a star topology, having a Master Node (MN) and other subordinate nodes (SN). All nodes are connected via Ethernet and run Linux with other proprietary applications. I have to build a recovery framework design so that any software entity, whether it's Linux, the ramdisk or an application, can be rolled back to a previous good version if something bad happens. Thus I am thinking of maintaining a state version matrix on the MN, where each state (1, 2, ..., n) represents good kernel, ramdisk and application versions for each SN. It may happen that one SN's version depends on another SN's version. Please see the following diagram:- So I am in a dilemma whether to use the package management methodology used by Debian distributions (like Ubuntu) or a GIT repository methodology, in order to roll back to previous good versions on either one SN or on all the dependent SNs. The method should also make it easy to upgrade SNs along with MNs. Some of the features I am trying to achieve:- 1) Upgrade of even a single software entity is achievable without hindering others. 2) Dependency checks must be done before applying a rollback or upgrade on each SN. 3) A user prompt should be given in case a dependency check fails. If the user still goes for rollback, all the SNs should get a notification to roll back their own releases (if required). 4) The binaries should be distributed on the SNs beforehand so that the recovery process is faster, rather than fetching them every time from the MN. 5) Release patches from developers for bug fixes and feature enhancements can be applied on a running system. 6) Each version can be easily tracked and distinguished. Thanks

    Read the article

  • Need guidelines for studying Game Development

    - by ShutterBug
    Hello everyone, I've completed my graduation in Computer Science and am currently working as a Software Engineer in a software company. I was wondering if I can build my career in game development. If so, what should my approach be? I've a few questions: Which universities to apply to for a masters? Preferably in Canada. Scholarships available? How shall I prepare myself before applying, to give me an edge or advantage over others? I know Java, C#, PHP etc. I don't think these languages will be needed in game development; in that case, what languages shall I focus on from now on? How do I get some ideas about the IDEs/engines/platforms of game development? I'm not talking about flash/browser games. Please suggest anything you want, as I don't know much about this field, so I'm most likely to miss the most important questions. Feel free to make this thread a starter guide for those interested in pursuing a career in game development. Post all relevant information. Thanks in advance. EDIT: I can see a lot of people suggested building a small project/game. If so, please suggest how I should start developing a small game (maybe a clone of some existing small game, e.g. Pacman, a brick game etc.) from start to end.

    Read the article

  • Using FlashCache on /var on startup

    - by WinkyWolly
    I would like to have FlashCache cache my /var partition, however I can't seem to get it to play nice upon bootup (i.e., I'm not really sure how to do it). I'm not sure if I need to modify the initramfs/use DKMS, or if I can do it in user-land during bootup. The issue I'm running into is that /var mounts early, and therefore the device is busy (generally held open by syslogd). I'm positive this can be resolved by modifying the initramfs, although I simply haven't fiddled with it enough to get it working. They have instructions on how to boot from a cached root partition, but I'm not sure whether those instructions apply to my use case. Any help / pointers in the right direction would be absolutely splendid.
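
    For reference, the usual pattern is to create the flashcache device early (from the initramfs, before /var is mounted) and mount the mapped device instead of the raw partition; a rough sketch with placeholder device names:

        # Build a write-back cache named "cachedev" from an SSD and the /var partition
        flashcache_create -p back cachedev /dev/sdb1 /dev/sda5
        # Then /etc/fstab mounts the mapped device rather than /dev/sda5:
        # /dev/mapper/cachedev  /var  ext4  defaults  0  2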

    Read the article

  • Should I go with Java or Python for my next project, after using PHP for 5 years?

    - by vim
    I have a full-time PHP job and I've been working with PHP for 5 years. I'm not willing to stay within this technology stack any more. I also worked with Java for 2 years before, so for me it looks more obvious to switch back to Java. However, during the last 5 years I was thinking about starting my own project, and now I think I have a very good SaaS idea. I'm completely confused about what technology I should use for my project. I don't want to do it in PHP, and after reading many articles about rapid prototype development it seems to me that Django is the best option. I will continue to work full time for my current employer because I need to pay my bills, and will work on my project in my free time. The concern I have is: should I do my project in Java or Python? To be realistic, there is always a risk when you are doing your own project/start-up. If I do it in Java, in the worst case scenario I believe I will be able to find a full-time Java position, because I already have some experience in Java plus recent experience in my project. With regards to Python, it looks like it is not very popular in my area and salaries are much lower than for Java. On the other hand, I have a feeling that if I chose Java it would take me way longer to finish my project. Guys, I'm completely confused and I need your advice. P.S. I moved to London 2 years ago from another country; local guys are very welcome to share their thoughts about London's job market.

    Read the article

  • CUDA 4.1 Update

    - by N0xus
    I'm currently working on porting a particle system to update on the GPU via the use of CUDA. With CUDA, I've already passed over the required data to the GPU and allocated and copied the data via the host. When I build the project, it all runs fine, but when I run it, the project says I need to allocate my h_position pointer. This pointer is my host pointer and is meant to hold the data. I know I need to pass the current particle positions to the required cudaMemcpy call; they are currently stored in a list, with a for loop iterating over each particle and calling the following line of code:

        m_particleList[i].positionY = m_particleList[i].positionY - (m_particleList[i].velocity * frameTime * 0.001f);

    My current host-side CUDA code looks like this:

        float* h_position; // Your host pointer. This holds the data (I assume it's already filled with the data.)
        float* d_position; // Your device pointer, we will allocate and fill this
        float* d_velocity;
        float* d_time;
        int threads_per_block = 128; // You should play with this value
        int blocks = m_maxParticles/threads_per_block + ( (m_maxParticles%threads_per_block)?1:0 );
        const int N = 10;
        size_t size = N * sizeof(float);
        cudaMalloc( (void**)&d_position, m_maxParticles * sizeof(float) );
        cudaMemcpy( d_position, h_position, m_maxParticles * sizeof(float), cudaMemcpyHostToDevice);

    Both of these can be found inside my UpdateParticle() method. I had originally thought it would be a simple case of changing the h_position variable in the cudaMemcpy to m_particleList[i], but then I get the following error:

        no suitable conversion function from "ParticleSystemClass::ParticleType" to "const void *" exists

    I've probably messed up somewhere, but could someone please help fix the issues I'm facing? Everything else seems to be running fine; it's just when I try to run the program that certain things hit the fan.
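
    The conversion error arises because m_particleList is an array of ParticleType structs rather than a contiguous float array, so it cannot be handed to cudaMemcpy directly. A minimal sketch of one fix, reusing the names from the question (the ParticleType layout is assumed), packs the Y positions into a host buffer first:

        // Pack per-particle Y positions into a contiguous host array
        float* h_position = new float[m_maxParticles];
        for (int i = 0; i < m_maxParticles; ++i)
            h_position[i] = m_particleList[i].positionY;

        cudaMalloc((void**)&d_position, m_maxParticles * sizeof(float));
        cudaMemcpy(d_position, h_position, m_maxParticles * sizeof(float), cudaMemcpyHostToDevice);
        // ... launch the update kernel, cudaMemcpy the results back, unpack into
        // m_particleList, then:
        delete[] h_position;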

    Read the article

  • Extend university wifi network

    - by asfasdoiuh ouhouhouh
    I live in a university campus and I can get a wifi signal on the outside of my window, but not in the house. The solution I use at the moment is a USB wifi dongle outside connected to my laptop, but the lack of an internal antenna makes the connection quite unreliable at times. So I was trying to find another solution to improve the reception of my network. One idea is to set up a router on the outside (in a place with a stronger signal) and redirect the connection inside the house with an Ethernet cable, but the problem is that our uni wifi is managed by a captive portal (BlueSocket with DNS redirection to a login page) and the authentication has to happen on the MAC address that connects to the net (so the client appliance in this case). If I use a router with MAC-clone capability, will I be able to be redirected through the captive portal on my laptop and log in from there, or do I need to set up my router to fill in the login page by itself? Are there other hardware/software solutions I can use to get what I want?

    Read the article

  • What kind of issues would occur if resolv.conf had no DNS servers set?

    - by Stuart Woodward
    I want to create a server for a customer and have that customer finish the configuration themselves. It has been decided that rather than setting default DNS servers (i.e. something like Google's), the customer should enter the information themselves. I assume that the customer is technically competent enough to do this. If however they forget or neglect to set this up, they might spend some time trying to figure out what is wrong and eventually contact support. (In this case, I think that setting a default might have been better.) Apart from the obvious inability to resolve hosts, what other issues might they face until they have set valid DNS servers in resolv.conf?
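
    For reference, a working configuration needs only a line or two; the addresses below are Google's public resolvers, used here purely as an example of what the customer would add:

        # /etc/resolv.conf
        nameserver 8.8.8.8
        nameserver 8.8.4.4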

    Read the article

  • Amazon and bandwidth limits

    - by Dave
    This question may sound weird to some of you, but I have never really used the cloud and, above all, I'm still a beginner in web development, so I would be really thankful if someone could answer a few of my questions, though they may sound weird. I would like to deploy a simple website to Amazon; however, I'm concerned about bandwidth, as they charge $0.12 per GB and I'm not able to set a budget limit. My problem is that I wouldn't like to pay for 1000GB of bandwidth if someone for some reason decides to download one file constantly. Could some of you who have experience with Amazon tell me what happens if my app is able to handle, say, 50 req/sec at 30kB per page? Does that mean that in the worst case I would have to pay req/sec × seconds × minutes × hours × days × page size = 50 × 60 × 60 × 24 × 30 × 30kB = 3888GB?

    Read the article

  • Massive 404 attack with non-existent URLs. How to prevent this?

    - by tattvamasi
    The problem is a whole load of 404 errors, as reported by Google Webmaster Tools, with pages and queries that have never been there. One of them is viewtopic.php, and I've also noticed a scary number of attempts to check if the site is a WordPress site (wp_admin) and for the cPanel login. I block TRACE already, and the server is equipped with some defense against scanning/hacking. However, this doesn't seem to stop it. The referrer is, according to Google Webmaster, totally.me. I have looked for a solution to stop this, because it certainly isn't good for the poor real actual users, let alone the SEO concerns. I am using the Perishable Press mini blacklist (found here), a standard referrer blocker (for porn, herbal, casino sites), and even some software to protect the site (XSS blocking, SQL injection, etc). The server is using other measures as well, so one would assume that the site is safe (hopefully), but it isn't ending. Does anybody else have the same problem, or am I the only one seeing this? Is it what I think, i.e., some sort of attack? Is there a way to fix it, or better, prevent this useless resource waste? EDIT: I've never used the question itself to thank for the answers, but I hope this can be done. Thank you all for your insightful replies, which helped me find my way out of this. I have followed everyone's suggestions and implemented the following: a honeypot; a script that listens for suspect URLs in the 404 page and sends me an email with the user agent/IP, while returning a standard 404 header; and a script that rewards legitimate users, in the same custom 404 page, in case they end up clicking on one of those URLs. In less than 24 hours I have been able to isolate some suspect IPs, all listed in Spamhaus. All the IPs logged so far belong to spam VPS hosting companies. Thank you all again; I would have accepted all answers if I could.
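
    For anyone isolating offenders the same way, here is a sketch of the log side (an Apache combined log at the usual Debian path is assumed) that counts the IPs probing for URLs the site has never served:

        # Top IPs hitting 404s on known probe targets
        awk '$9 == 404 && $7 ~ /wp-login|wp-admin|viewtopic\.php|cpanel/ { hits[$1]++ }
             END { for (ip in hits) print hits[ip], ip }' /var/log/apache2/access.log | sort -rn | head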

    Read the article

  • Some search keywords in the omnibox have stopped working

    - by pinouchon
    My problem is the same as described here: http://productforums.google.com/forum/#!topic/chrome/HE7ak4y92YU When I type a search in the Chrome omnibox, after typing enfr [space] Chrome replaces it with a search in the history, and I end up googling my search prefixed by the keyword: for example, I end up googling "enfr former" instead of translating "former" from English into French. (In my case, the search engine has the following name: enfr, keyword: enfr, search URL: http://translate.google.fr/#en/fr/%s in the other search engines options.) Chrome version: 21.0.1180.89. How do I restore the normal behavior (searching with the specified search engine instead of the default search)? Update: I have seen that clearing browsing data while I did the search fixes the problem, but I am not sure whether it won't reappear.

    Read the article

  • Copying files between servers by creation time

    - by driftux
    My bash scripting knowledge is very weak, that's why I'm asking for help here. What is the most effective bash script, performance-wise, to find and copy files from one LINUX server to another, using the specifications described below? I need a bash script which finds only new files created on server A, in directories named "Z", in the interval from 0 to 10 minutes ago, and then transfers them to server B. I think it can be done by formatting a query and executing it for each new file found: "scp /X/Y.../Z/file root@hostname:/X/Y.../Z/". If the script finds no such remote path on server B, it should skip that file and continue with the next file whose directory exists. Files should be copied with permissions, group, owner and creation time. X/Y... are various directory paths. I want to set up a cron job to execute this script every 10 minutes, so performance is very important in this case. Thank you.
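
    A minimal sketch under those specifications (paths are placeholders; note that Linux filesystems generally record modification time rather than creation time, which is what -mmin tests, and that scp -p preserves mode and timestamps while the remote owner will be the connecting user — rsync -aR is worth considering if owner/group must survive exactly):

        #!/bin/bash
        # Copy files changed in the last 10 minutes from any .../Z/ directory to server B
        find /X -type f -path '*/Z/*' -mmin -10 | while read -r f; do
            scp -p "$f" "root@hostname:$f" || echo "skipped $f (remote dir missing?)" >&2
        done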

    Read the article

  • Shut down by pressing the power button twice

    - by iHarry
    I have set my laptop's power button to shut down the computer when it is pressed (Windows 7 x86). But I often hit the button accidentally, and it's rare that I'm able to prevent the shutdown in time. I don't want to disable the button, though. Is there a way to get a confirmation of whether I really want to shut down the computer (just like in Mac OS X Lion, where a confirmation appears when the power button is pressed)? Is there some way to bring the same behavior to Windows 7 (and 8)?

    Read the article

  • Tuning web server response

    - by Vedran Wex Maricevic
    I have this same question on Stack Overflow and I was advised to ask it here, hoping for more information. Here is the question: I am in a rather unfavorable situation. I have an AspDotNetStorefront e-commerce application and a search addon called VibeTrib. I don't have source code for either of those. The store that runs on StoreFront and VibeTrib has close to 250k products. Also we have lots of filters. I spoke to VibeTrib reps, and they want extra money to optimize the queries that they use. The money they require is not a big deal, but the problem is I don't trust them anymore. What we got is much different from what is being advertised. To cut the long story short: I am running the store on Amazon AWS now, and regardless of what DB server (MS SQL 2012) I set up (I tried 32GB RAM monster instances), it is slow. The Ajax search uses full-text search and displays search keywords relatively fast, but once the search is performed (to display all results) it is still slow!!! Is there something I could do on my own end to accelerate things? I do have full control over the EC2 instance (Windows Server 2012 and IIS 8). Can I set IIS to step in for the search and cache some of it? I was hoping to cache at least some of the most common words. My best bet is IIS 8 :) Is there any help in my case? Thanks

    Read the article

  • Better logging for cronjob output

    - by Stefan Lasiewski
    I am looking for a better way to log cronjobs. Most cronjobs tend to spam email or the console, get ignored, or create yet another logfile. In this case, I have a Nagios NSCA script which sends data to a central Nagios server. This send_nsca script also prints a single status line to STDOUT, indicating success or failure.

        0 * * * * root /usr/local/nagios/sbin/nsca_check_disk

    This emails the following message to root@localhost, which is then forwarded to my team of sysadmins. Spam.

        forwarded nsca_check_disk: 1 data packet(s) sent to host successfully.

    I'm looking for a log method which: Doesn't spam the messages to email or the console. Doesn't create yet another crufty logfile which requires cleanup months or years later. Captures the log information somewhere, so it can be viewed later if desired. Works on most unixes. Fits into an existing log infrastructure. Uses common syslog conventions like 'facility'. Some of these are third-party scripts, and don't always do logging internally.
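
    One pattern that covers most of those points is piping the job's output to logger(1), which hands it to whatever syslog infrastructure already exists, with a tag and facility of your choosing; a sketch using the crontab line above:

        # No mail, no new logfile; messages land in syslog under facility local0
        0 * * * * root /usr/local/nagios/sbin/nsca_check_disk 2>&1 | logger -t nsca_check_disk -p local0.info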

    Read the article

  • Cloud hosting and single hardware point of failure?

    - by PeterB
    From talking to sales, I thought Rackspace Cloud was running on a SAN and compute nodes (as VMware's offerings do), only to find out it doesn't, so when the host server goes down for maintenance, all cloud servers on that server go down (in our case for 2.5 hours). I understand Amazon EC2 also has this single-server point of failure. Which cloud hosting solutions don't rely on a single server? I've yet to find a list by architecture. Is there a term that distinguishes between these types of 'cloud'? Is one of these 'grid computing' and the other 'virtualisation'? Can a SAN-backed solution provide the same reliability as 2 mirrored cloud servers on (say) Rackspace Cloud? I am more familiar with the VMware architecture and would like to understand the advantages and disadvantages of each approach. I understand the standard architecture is to have multiple cloud servers with mirrored data between them; until we need multiple database servers, I'm wondering if a SAN/node hosting solution would provide the lack of downtime we need without the added complexity.

    Read the article
