Search Results

Search found 21511 results on 861 pages for 'appstore approval process'.


  • Implications of renaming a SQL Server 2000 instance

    - by peg_leg
    I have a SQL Server 2000 instance whose data I want to replicate to a SQL Server 2008 server. However, the output of select @@servername and select serverproperty('servername') on the 2000 server differ, and that mismatch prevents replication. There is a process to resolve this at http://support.microsoft.com/kb/818334. Has anyone done this? What implications are there for following this process? Any 'gotchas' that need to be guarded against? My 2000 server is a lone production server... very scary. Please advise.
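
    For context, the core of the KB 818334 procedure is dropping the stale server name, re-adding the correct one as the local server, and restarting the SQL Server service. A rough sketch of the commands involved, run from a command prompt on the 2000 box; the server names below are placeholders, so confirm the exact steps against the KB article and test outside production first:

        rem Check what the instance currently reports
        osql -E -Q "select @@servername, serverproperty('servername')"

        rem Drop the stale name and register the real one as the local server
        osql -E -Q "exec sp_dropserver 'OLD_NAME'"
        osql -E -Q "exec sp_addserver 'NEW_NAME', 'local'"

        rem The change only takes effect after the service restarts
        net stop MSSQLSERVER
        net start MSSQLSERVER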

    Read the article

  • Good set of web hosting permissions?

    - by Jorge Israel Peña
    Hey guys, I just got a Linode and I'm in the process of configuring it. It's running nginx with php-fpm and Passenger. nginx was compiled and is running as user nginx. php-fpm (PHP with the FastCGI process manager) is running as www-data (in group www-data). My sites are currently in /var/www, so for example /var/www/test.com. I'm just wondering what the general 'flow' of things is. For example, /var/www is owned by root; should I chown /var/www/test.com to nginx or www-data? Or should I put nginx in the www-data group? How should site uploading work? Do I just transfer files to the /var/www/test.com directory as root (sudo) and then chown -R www-data:www-data .? Thanks. I'm capable of figuring things out on my own; I'm just wondering what the typical/general way of handling users/groups/permissions/site files is on Linux with a webserver.
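
    One common arrangement, sketched below rather than prescribed, is to let the PHP user (www-data) own the site, add the webserver user to that group so it can serve static files, and keep world access off. Assuming the users and paths mentioned above:

        # Let www-data own the site and let group members (e.g. nginx) read it
        sudo chown -R www-data:www-data /var/www/test.com
        sudo usermod -aG www-data nginx

        # Directories enterable/readable by owner and group, files readable by owner and group
        sudo find /var/www/test.com -type d -exec chmod 750 {} \;
        sudo find /var/www/test.com -type f -exec chmod 640 {} \;

    If you want to upload as your own user rather than as root, adding yourself to the www-data group and making the site directories group-writable (g+w) is one option, instead of chowning after every transfer.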

    Read the article

  • Media Center consumes all available memory when attempting to play music off a server

    - by RCIX
    I have Windows 7 Ultimate, and recently, when I try to play a song off of my Twonky Media Server/Windows Media Connect share (based on an HP WHS box with an Atom CPU), it plays choppily. When I open Resource Monitor, it shows that after I tell the music to play, memory usage rapidly spikes to consume most, if not all, of the available memory on my system (excluding a couple hundred megabytes in standby). Why does it do this, and is there anything I can do to stop it? Edit: it happens when I attempt to browse the server's music, not just when I play music. Edit 2: the "ehshell" process is what consumes the memory, which appears to be specific to Media Center. Moreover, the ehshell process doesn't die in this case. Edit 3: it only happens when browsing my Twonky library, and not my Windows Media Connect library.

    Read the article

  • Issue with Toshiba Satellite C50 A-1JM

    - by Nathan Hawkes
    I am having some odd problems with my Toshiba laptop. For the last two days it has not loaded past the boot screen for hours, if at all. When I force the laptop off and switch it back on, it goes to the Preparing Automatic Repair process but will not complete the process and reach the desktop. However, this only happens when the battery is inserted. When I remove the battery and run the laptop on mains power, it works without issue. Would this be a battery issue (given it works without the battery) or a hard drive issue (given it won't get past the boot screen)? If neither, what would you suggest as a solution?

    Read the article

  • SSAS Compare version 1.0 released

    - by Red Gate Software BI Tools Team
    We’re pleased to announce that SSAS Compare version 1.0 has been released as a free tool. Version 1.0 includes: comparisons of live databases and XMLA or Analysis Services Project files; MDX syntax diffs and highlighting; server comparisons; a deployment wizard with summaries of scripted actions; and bug fixes along with engine and UI refinements. We’ve tested it on as many cube configurations as we could find (not just good old AdventureWorks!), but we can’t provide support for free tools, so if you’re reliant on SSAS Compare for your cube deployment, use it at your own risk. See the user license agreement in the installer for more details. SSAS Compare has come a long way from its humble beginnings as an internal tool first developed for Red Gate’s own BI developers. Today’s SSAS Compare is much more stable, not to mention much easier to use, and something the team is proud to have released with Red Gate’s name on it. Next: Deployment Manager. We’re working on integrating SSAS Compare cube deployment with our new Deployment Manager tool, so you’ll be able to create cube deployment scripts and automate the deployment process, too. We’re documenting the process in a white paper we’ll publish online in the next week. Thank you! Thanks to all the SSAS Compare users out there. Without your feedback, we could never have produced such a stable product so quickly. We hope you continue to find it useful. See you in Deployment Manager!

    Read the article

  • Online Media Daily: Oracle Takes Social Marketing Seriously

    - by Kathryn Perry
    In the article published on Nov 12, 2012 and titled "Oracle Integrates Social Marketing Into Enterprise To Gain Marketing Revs," Online Media Daily explores Oracle's approach to social marketing. The publication says that Oracle is focused on showing marketers how to integrate social data into corporate business processes and how to "socialize" the corporate world. The article goes on to state: "Enterprise software companies like Oracle, SAP, IBM, Salesforce and Microsoft have been slowly building up an expertise in social marketing to integrate the data into traditional enterprise resource planning, and customer relationship management tools into social marketing tools. Meg Bear, VP of cloud social platform at Oracle, sees the integration with ERP systems as a differentiator for the company. Oracle Social Relationship Management launched last month. It integrates social data into traditional enterprise applications like Oracle Fusion Marketing, Oracle Fusion Sales Catalog, Oracle ATG Web Commerce and Oracle ERP." (Read more: http://www.mediapost.com/publications/article/187096/oracle-integrates-social-marketing-into-enterprise.html) The post goes on to quote a Forrester analyst: "There's room for any process-driven application to run more efficiently, especially if they're socially enabled," said Rob Koplowitz, VP and principal analyst at Forrester Research. "It takes the human part of the process not generally captured today to provide better access to content, information and collective actions." Koplowitz said several acquisitions support Oracle's long-term vision: to layer social on top of other enterprise apps, like its ERP platform. With many great acquisitions under our belt and organically grown social tools, the market recognizes that Oracle is poised to seize the moment in socially enabled business apps. Continue reading the full article here.

    Read the article

  • Hidden Gems: Accelerating Oracle Data Integrator with SOA, Groovy, SDK, and XML

    - by Alex Kotopoulis
    On the last day of Oracle OpenWorld, we had a final advanced session on getting the most out of Oracle Data Integrator through the use of various advanced techniques. The primary way to improve your ODI processes is to choose the optimal knowledge modules for your load and take advantage of the optimized tools of your database, such as Oracle Data Pump and similar mechanisms in other databases. Knowledge modules also allow you to customize tasks, allowing you to codify best practices that are consistently applied by all integration developers. The ODI SDK is another very powerful means to automate and speed up your integration development process. It allows you to automate life cycle management, code comparison, repetitive code generation and changes to your integration projects. The SDK is easily accessible through Java or scripting languages such as Groovy and Jython. Finally, all Oracle Data Integration products provide services that can be integrated into a larger Service Oriented Architecture. This moves data integration from an isolated environment into an agile part of a larger business process environment. All Oracle data integration products can play a part in this: Oracle GoldenGate can integrate into business event streams by processing JMS queues or publishing new events based on database transactions. Oracle Data Integrator allows full control of its runtime sessions through web services, so that integration jobs can become part of business processes. Oracle Data Service Integrator provides a data virtualization layer over your distributed sources, allowing unified reading and updating of heterogeneous data without replicating and moving it. Oracle Enterprise Data Quality provides data quality services to cleanse and deduplicate your records through web services.

    Read the article

  • Command+Tab, Exposé and many other functions stop working after screen sharing

    - by moshen
    When I'm away from my office Mac I typically log in via screen sharing or VNC from home or from another Mac. Lately, after using screen sharing this way, my office Mac has several problems: Command+Tab and Command+` no longer work; Exposé and other F-key functions no longer work; Ctrl+Space to launch Google QSB no longer works. Things I have tried to remedy this: restarting the Finder process; restarting the Dock process; disabling the screensaver (unfortunately, the screensaver still runs... a connected issue?); deleting preference files for the Dock/Finder/screensaver etc. The only thing that seems to work is a restart, which I usually try to avoid. System details: MacBook Pro 13", OS X 10.5.8.
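
    In case it is useful to others hitting the same thing, the processes that own Command+Tab, Exposé and the menu bar can be relaunched from Terminal without a full restart; they respawn automatically on 10.5. A quick sketch of the commands involved:

        # Relaunch the app switcher / Exposé owner, the menu bar extras, and Finder
        killall Dock
        killall SystemUIServer
        killall Finder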

    Read the article

  • Modifying the initrd.img to run additional binaries in a PXE booted RHEL 6

    - by Charles Long
    I am trying to add additional automation to our existing RHEL 6 (or Oracle's implementation thereof) PXE install process by running a script in the %pre section of my kickstart config that calls hpacucli, HP's RAID device configuration binary. My approach has been to modify the PXE-served initrd.img. I've unpacked the initrd.img and copied in the required libraries, binaries, and scripts, but when the system boots using the modified initrd.img, the modified /lib (and /lib64) are moved aside to /lib_old and /lib is linked to /mnt/runtime/lib. How can I change this configuration so that /lib is not moved (unlikely), or so that the required libraries are available in the runtime /mnt/runtime/lib? To test and confirm this, I've been able to get the install process to switch to the 6th virtual console, which lets me see errors, and then open a shell (a useful debugging mechanism):

        %pre
        exec < /dev/tty6 > /dev/tty6 2> /dev/tty6
        chvt 6
        /bin/sh
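
    For anyone repeating this, the general recipe for getting an extra binary and its libraries into the image is to unpack the cpio archive, copy the binary plus everything ldd reports, and repack it. A rough sketch, assuming a gzip-compressed cpio initrd and that hpacucli lives in /usr/sbin (run file(1) on the image first, since some builds use xz/lzma compression, and adjust the paths to your tree):

        mkdir /tmp/initrd && cd /tmp/initrd
        zcat /path/to/initrd.img | cpio -idmv

        # Copy the binary and every shared library it links against
        cp /usr/sbin/hpacucli usr/sbin/
        ldd /usr/sbin/hpacucli | awk '/=> \// {print $3}' | xargs -I{} cp -v {} lib64/

        # Repack the image for the PXE server
        find . | cpio -o -H newc | gzip -9 > /path/to/initrd-new.img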

    Read the article

  • What is an effective way to convert a shared memory-mapped system to another data access model?

    - by Rob Jones
    I have a code base that is designed around shared memory. Each process that needs to access the memory maps it into its own address space. The data structures in the shared memory are directly accessed, that is, there is no API. For example, assume the following:

        typedef struct {
            int x;
            int y;
            struct {
                int a;
                int b;
            } z;
        } myStruct;

        myStruct s;

    Then a process might access this structure as:

        myStruct *s = mapGlobalMem();

    And use it as:

        int tmpX = s->x;

    The majority of the information in the global structure is configuration information that is set once and read many times. I would like to store this information in a database and develop an API to access the database. The problem is, these references are sprinkled throughout the code. I need a way to parse the code and identify global structure references that will need to be refactored. I've looked into using ANTLR to create a parser that will identify references to a small set of structures and enter them into a custom symbol table. I could then use this symbol table to identify which source files need to be refactored. It looks like a promising approach. What other approaches are there? Of course, I'm looking for a programmatic approach. There are far too many source files to examine each one visually. This is all ordinary ANSI C. Nothing else.
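
    As a lower-tech first pass before writing a grammar, plain text-search tools can usually enumerate the candidate files, and cscope can list every reference to a symbol once it has indexed the tree. A sketch, assuming the mapping function really is called mapGlobalMem and the type is myStruct (substitute your actual names and source root):

        # Files that obtain a pointer to the shared segment
        grep -rln 'mapGlobalMem' --include='*.c' --include='*.h' src/ > candidates.txt

        # Files that mention the shared struct type at all (declarations, casts, member access)
        grep -rln '\bmyStruct\b' --include='*.c' --include='*.h' src/ >> candidates.txt
        sort -u candidates.txt -o candidates.txt

        # Build a cscope index, then list every reference to a symbol in batch mode
        cscope -Rbq
        cscope -d -L -0 mapGlobalMem

    This will not distinguish reads from writes the way a real parser would, but it gives a quick upper bound on the refactoring surface.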

    Read the article

  • Daemon failures and changing PID numbers

    - by Alessandra Bilardi
    proftpd, sshd and apache run via their /etc/init.d scripts on a Linux distro. I was monitoring ports 21, 22 and 80 with an external farm-monitoring service: every 5 minutes the service checks each port and notifies only on failure. There were 5-6 failures per 24h. It seemed as if someone was kicking the switch occasionally. I then added monit and collectd monitoring, with the checks on ports 21, 22 and 80 running every minute. I no longer receive notifications from the farm-monitoring service; I only receive monit notifications about failures and/or the succeeding/changing PID of the proftpd, sshd or apache process. The failures are still 5-6 times per 24h. The collectd monitoring of CPU, load average and each process looks regular, with no peaks. Nothing is physically kicking the switch, but something is making the monitoring report failures. Is it simple interference, or is it indicative of some abnormality? What could cause these failures?
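
    One way to tell whether the remote checks are flaky or the daemons really are restarting is to log both things locally at a higher frequency and correlate the timestamps with the alerts. A rough sketch (nc and pgrep assumed available; the interval, log path and the apache process name are examples):

        #!/bin/sh
        # Log local port checks and daemon PIDs every 10 seconds
        while true; do
            for p in 21 22 80; do
                nc -z -w 5 127.0.0.1 "$p" || echo "$(date '+%F %T') port $p FAILED" >> /var/log/portwatch.log
            done
            echo "$(date '+%F %T') pids proftpd=$(pgrep -d, proftpd) sshd=$(pgrep -d, -x sshd) apache=$(pgrep -d, httpd)" >> /var/log/portwatch.log
            sleep 10
        done

    If the local checks never fail while the remote ones do, the problem is on the network path or at the monitoring service rather than on the host.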

    Read the article

  • Where can I find ready-to-use Windows scripts that use RoboCopy?

    - by Geo
    We are installing the Windows Resource Kit, which installs RoboCopy. We want to have access to a few Windows scripts that use RoboCopy so we can start from those to build something else. Any ideas on where I can find a few samples? NOTE 1: A bit of information: every time we try to copy the D drive to the E drive (a new drive) we get an error that says: ERROR 32 (0x00000020) Copying File d:\pagefile.sys The process cannot access the file because it is being used by another process. Waiting 30 seconds. Just to help figure it out.
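
    Not a script repository, but the error quoted above is just RoboCopy tripping over locked system files at the drive root; most drive-to-drive scripts are built around a one-liner that excludes them. A sketch (drive letters, log path and excluded folders are examples; run robocopy /? to confirm the switches your version supports):

        robocopy D:\ E:\ /MIR /R:1 /W:5 /XF pagefile.sys hiberfil.sys /XD "System Volume Information" "$Recycle.Bin" /LOG:C:\robocopy.log

    /MIR mirrors the tree (including deletions), /R and /W keep it from retrying locked files for minutes, and /XF and /XD skip the files and folders that are always in use.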

    Read the article

  • Windows 8 Mail causes event 529 when connecting to Exchange

    - by holian
    I set up my company Exchange mailbox in Windows 8.1 Mail (from outside the network). Everything works fine, but after I start Windows 8.1 Mail I continuously get events with ID 529 in the security log on the server:

        Reason: Unknown user name or bad password
        User Name: [email protected]
        Domain:
        Logon Type: 8
        Logon Process: Advapi
        Authentication Package: Negotiate
        Workstation Name: SERVERNAME
        Caller User Name: SERVERNAME$
        Caller Domain: BAR NUL
        Caller Logon ID: (0x0, 0x3E7)
        Caller Process ID: 4384
        Transited Services: -
        Source Network Address: 56.43.213.122
        Source Port: 55698

    If I close Windows Mail, the events stop flooding the security log on the server. Connection parameters in Windows 8: email: [email protected], password, domain: company.local, username: myemail, server: mydomain.dyndns.org, SSL: yes. Any idea what the problem is? I can check my mail with the same settings on my Android phone without any problem. Thank you.

    Read the article

  • Should I use mod_wsgi embedded mode if I have full control of Apache?

    - by mgibsonbr
    I'm managing a bunch of sites and applications on shared hosting, using Django via mod_wsgi. I had planned to use daemon mode from the beginning (to avoid restart problems), but ended up purchasing a plan that allows me to run a dedicated Apache instance. I kept using daemon mode for convenience, but I'm afraid it's consuming more server resources than it should (I have different projects for each site, each with its own process and process group), so I'm considering switching to embedded mode. Would that be a sensible thing to do? I'd still be able to restart Apache any time I need to, and I wouldn't need so many child processes and sockets (so I hope resource usage would decrease). But I'm unsure whether doing so would make it more difficult to manage those sites (if I need to update one, I have to restart all of them) or whether the applications would be properly isolated from one another. Are these problems really significant (or only a minor nuisance)? Are there other drawbacks I couldn't foresee? I'm looking for advice on any aspect of this setup: maintainability, performance, security etc. Tips for improving the current setup are also welcome (I know how to correctly configure a basic mod_wsgi setup, but I'm clueless about sensible values for threads, processes etc).
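
    For scale, the main knobs being asked about live on WSGIDaemonProcess. One commonly suggested middle ground is to stay in daemon mode but give several low-traffic sites one small shared process group instead of a group each; a sketch with purely illustrative names, paths and counts (check the mod_wsgi documentation for the options your version supports):

        # One small daemon process group shared by several low-traffic sites
        WSGIDaemonProcess shared_sites processes=1 threads=10 display-name=%{GROUP}

        <VirtualHost *:80>
            ServerName site1.example.com
            WSGIScriptAlias / /var/www/site1/wsgi.py
            WSGIProcessGroup shared_sites
        </VirtualHost>

    Applications sharing a daemon process are separated only by Python sub-interpreters, so this trades some isolation for memory, which is essentially the same trade-off embedded mode makes.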

    Read the article

  • Large file copy from NFS to local disk performance drop

    - by Bernhard
    I'm trying to copy a 200GB file from an NFS mount to a local disk. The local disk is an XFS filesystem on LVM on top of a RAID 5 array (hardware RAID controller). I'm using rsync to monitor the transfer speed. At the beginning the IO speed is about 200MB/s, stable for the first 18GB. But then the performance drops by a factor of 10-20 and never recovers to the initial rate. Sometimes it reaches about 50-100MB/s, but just for a few seconds, and then the process seems to hang for a bit. At the same time, all file-stat operations on the target filesystem block for a long time (minutes). Interrupting the copy process also blocks for several minutes, and a subsequent delete of the partly copied file takes several minutes as well. Any ideas what could be causing this?
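
    That pattern (fast for the first N gigabytes, then long stalls and blocked stat calls) often means the page cache is absorbing writes until the kernel starts forcing write-back to the RAID 5 volume. A few diagnostic commands that can confirm or rule that out; a sketch, since sysctl names and sensible values vary by kernel and the test path is an example:

        # Watch how much dirty data is queued while the copy runs
        watch -n1 'grep -E "^(Dirty|Writeback):" /proc/meminfo'

        # Current write-back thresholds
        sysctl vm.dirty_ratio vm.dirty_background_ratio

        # Raw sequential write speed of the target, bypassing rsync and NFS
        dd if=/dev/zero of=/mnt/xfs/ddtest bs=1M count=8192 oflag=direct conv=fsync

    If Dirty climbs to several gigabytes before the slowdown hits, lowering vm.dirty_background_ratio (or throttling rsync with --bwlimit) is worth an experiment; if the dd test is also slow, the problem is the RAID 5 write path itself.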

    Read the article

  • Do large folder sizes slow down IO performance?

    - by Aaron
    We have a Linux server process that writes a few thousand files to a directory, deletes the files, and then writes a few thousand more files to the same directory without deleting the directory. What I'm starting to see is that the process doing the writing is getting slower and slower. My question is this: the directory size of the folder has grown from 4096 to over 200000, as seen in this output of ls -l:

        root@ad57rs0b# ls -l 15000PN5AIA3I6_B
        total 232
        drwxr-xr-x 2 chef chef 233472 May 30 21:35 barcodes

    On ext3, can these large directory sizes slow down performance? Thanks. Aaron
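
    They can: ext3 never shrinks a directory once it has grown, so every create/delete pass still walks the bloated structure. Two things worth checking, sketched with an example device name and the directory from the listing above:

        # Is hashed directory indexing (dir_index) enabled on this filesystem?
        tune2fs -l /dev/sda1 | grep -i 'features'

        # A directory's on-disk size only grows; recreating it restores a compact one
        mv barcodes barcodes.old
        mkdir barcodes && chown chef:chef barcodes
        # ...migrate or remove barcodes.old when convenient

    If dir_index is off, enabling it (tune2fs -O dir_index, followed by e2fsck -D on the unmounted filesystem) speeds up lookups in large directories considerably.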

    Read the article

  • Writing acceptance test cases

    - by HH_
    We are integrating a testing process into our SCRUM process. My new role is to write acceptance tests for our web applications in order to automate them later. I have read a lot about how test cases should be written, but none of it gave me practical advice for writing test cases for complex web applications; instead it offered conflicting principles that I found hard to apply:

    Test cases should be short: take the example of a CMS. Short test cases are easy to maintain and make it easy to identify the inputs and outputs. But what if I want to test a long series of operations (e.g. adding a document, sending a notification to another user, the other user replies, the document changes state, the user gets a notice)? It rather seems to me that test cases should represent complete scenarios. But I can see how this would produce overly complex test documents.

    Tests should identify inputs and outputs: what if I have a long form with many interacting fields, with different behaviors? Do I write one test for everything, or one for each?

    Test cases should be independent: but how can I apply that if testing the upload operation requires that the connect operation is successful? And how does it apply to writing test cases? Should I write a test for each operation, with each test declaring its dependencies, or should I rewrite the whole scenario for each test?

    Test cases should be lightly documented: this principle is specific to Agile projects. So do you have any advice on how to implement it?

    Although I thought that writing acceptance test cases was going to be simple, I found myself overwhelmed by every decision I had to make (FYI: I am a developer and not a professional tester). So my main question is: what steps or advice do you have for writing maintainable acceptance test cases for complex applications? Thank you.

    Read the article

  • Empty sshd_config file

    - by Thomas
    I run a CentOS 5 server with a LAMP stack. I was told this morning that the server was down and not serving web content. I then tried to restart httpd, but it failed because another process was listening on port 443. I checked what process was running on 443 using netstat, and it was sshd. I then checked the sshd_config file to see which ports sshd was running on, but the sshd_config file was completely blank. I then ran chkrootkit and it flagged nothing suspicious. What could have caused the sshd_config file to be blank and the sshd service to be restarted? I would really value your thoughts. All the best.
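
    A few quick checks can help distinguish accidental truncation from something more sinister; a sketch, assuming the stock CentOS 5 openssh packages:

        # What exactly is bound to 22 and 443 right now, and which binary is it?
        netstat -tlnp | grep -E ':(22|443) '
        ls -l /proc/$(pgrep -o sshd)/exe /etc/ssh/sshd_config

        # Verify the installed OpenSSH files against the RPM database (size/md5/mtime changes)
        rpm -V openssh-server openssh

        # Who touched the config, and what has sshd been logging?
        grep sshd /var/log/secure | tail -n 50

    If rpm -V reports an altered sshd binary, treat the box as compromised rather than just restoring the config file.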

    Read the article

  • Track sales and commission with third-party tool

    - by Andrew
    I have a clothing website where I link to various clothing retailers. I have reached an agreement with one of the retailers whereby they will pay a commission to us for every sale they make from traffic that was referred by our site. I need a mechanism for tracking how much commission should be paid to us, that involves as little work as possible to implement from their side. We both have Google Analytics. Option 1: They record a goal in their GA account whenever someone makes a purchase on their site. They see how many completed goals are marked as referral traffic from our site and calculate commission accordingly. The problem with this is that the whole process of calculating and paying commission will be manual. They will need to frequently check how many sales were generated by referral traffic from our site, and probably we will have to chase them for commission payments. Also - since we won't have access to their GA data - we will need to trust that they report all sales accurately. Option 2: Sign them up to an affiliate network like Commission Junction or Google's Affiliate Network, and connect to them through this network. The problem with this solution is that it seems too heavyweight; ideally we don't want to ask a retailer to go through the whole sign up process just to deal with us and pay us commission. I am assuming that there must be some lightweight service that tracks the number of sales by one site and pays commission accordingly to the other site, where the sign up and installation procedure is simple and fast.

    Read the article

  • NT4 server generates too many weird DNS queries; how can I see the source PID?

    - by Hanan N.
    I have an NT4 server that in the last two weeks has started to generate too many weird DNS queries to the DNS server it is set to use. I have received warnings from the IPS system that it has blocked the responses from the DNS server back to the NT4 server. The queries it generates don't relate to any computer in the network; they look like 120624100088.xxxxxxx.net, where xxxxxxx is the internal network and the numbers are just random in each query. I have done some research on how to get the PID that is generating the queries, and I found that only Process Monitor could give me that information, but since it is an NT4 system Process Monitor doesn't work on it. It is a production server and I can't just stop services as I want. I would like to get your advice on how I can find the PID that is generating these queries. Thanks.

    Read the article

  • PHP Script causing high CPU

    - by user20996
    I have a website which is causing me a major headache. There is one PHP script which is using far too much CPU; this only seems to happen when bots hit the site, and I don't want to block all the bots because we need them. I have the process manager output: Pid Owner Priority CPU % Memory % Command 16943 (Trace) (Kill) specialone 0 99.4 1.0 /usr/bin/php /var/www/specialone/page.php I ran strace -p 16943 on the process but it comes up with nothing. We have 2GB of RAM and the PHP memory_limit is set to 128M, which should be enough. The trouble I have is that the PHP code is a framework and the culprit page.php pulls in a lot of other PHP files, so I can't easily debug the PHP. Is there any way of finding out what the script is doing when it's using so much CPU, which would help me solve it?
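
    If strace shows nothing at all, the script is probably spinning in a pure-PHP loop rather than waiting on the system, so the useful views are one level up. Some things worth trying while the process is hot; the PID below is the one from the process manager output above:

        # Open files and working directory of the spinning process
        lsof -p 16943
        ls -l /proc/16943/cwd

        # A C-level backtrace at least shows whether it is stuck inside the Zend executor
        gdb -p 16943 -batch -ex 'bt'

    Longer term, logging the request URI and getmypid() at the top of page.php (or enabling a slow-request log if the site ever moves to php-fpm) makes it much easier to see which bot request triggers the loop.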

    Read the article

  • Is there a RAR extractor (for multiple rar files like .r00 etc.) that will use all of my quad cores?

    - by Christopher Done
    I've got a quad-core Intel processor. I've got a big file split into little ones as RAR files (foo.r00, foo.r01, etc.) which the RAR program extracts into one file/directory. Is there a RAR program where I can specify something like "use four cores" for the extract process? At the moment it sits there using 100% of one core. I recognise the bottleneck might be my hard drive anyway, but I don't see a lot of HD usage and suspect the decompression process is more intensive than waiting on I/O. For example, GNU Make accepts an argument (-j, I think) to tell it how many cores to use, which I used to compile PHP 6 really quickly.

    Read the article

  • File copying software to do this kind of work... in Windows 7 32 bit

    - by Senthil
    I need software (Windows 7, 32-bit) to help me with this process: I have my documents, music, video clips, movies and pictures on my hard disk. These will not be scattered around the system, but will be inside C:\Senthil\. At the end of every week, I want to plug in an external hard disk and run a program that makes sure whatever is inside C:\Senthil\ is also present on the external disk: files deleted from C:\Senthil\ should be deleted there, new files should be copied, and so on. At the end of the process, every bit inside the source folder on my internal disk should be on my external disk. A couple of important requirements and points: I do NOT need multiple or historic versions of my files; I only want the latest copy to be present in my "backup". Incremental backup makes sense: if files were not touched since the last backup, they need not be copied. The size of my folder will run into GBs and in a year or two will go into TBs, but I will make sure the external HDD is equal to or bigger than my source folder. I do not want it to run automatically, because when I accidentally delete a file in my source, it would delete the one in the backup (I know this is why we have versioning facilities). I just want to be able to run it manually so that I am in control of when the backup is made and what is backed up, and I should be able to pick something from the backup and restore it to the source folder in the above situation. Is there any software that will let me do exactly this? I don't want any other "smart" facility of the software to interfere with this process. I know what I want and the software can keep its smartness to itself :D The main reason I am asking this question is that I am a software developer and I could write this software myself, but I am a little constrained by time at the moment and I want to know if there is an existing program that can do this. Kindly don't worry about earthquakes or fire or snowstorms and bring up the "in case of a natural disaster your backup will also be in the damage zone and will be lost" argument, because: I will have bigger things to worry about than my holiday memories; I don't think I will digitally store any life-ruining documents; and this backup is only to avoid the inconvenience of obtaining a new copy of stuff that I have, not to protect it against the end of the world. I am more worried about power surges in my area frying my system, hard disk failure, children who merrily hit Delete, teens who hit Shift + Delete, or myself getting a little careless at times! In short: is there file/folder syncing software that listens to what I say and doesn't try to act smart? Please forgive me if I sound arrogant :)

    Read the article

  • PHP hits 100% CPU and eats RAM at the same time Monday to Friday

    - by Daniel Samuels
    We run a learning platform for primary schools here in the UK and it's all been running extremely well. However, at around 4PM Monday to Friday we see the same issue arise: 1-2 PHP threads will spike to 100% CPU and gradually start eating up RAM until the server(s) fall over. 98%+ of our requests are HTTPS; these come into our Layer 7 load balancer, which decrypts the SSL data, adds the X-HTTP-Forwarded-For header and forwards the request to an application server (we have 2 of those at the moment) on port 80. Our application servers have Varnish on port 80, which takes in the request from the load balancer and passes it through to Nginx on port 81. Nginx then works out which 'vhost' it needs to use and passes any PHP processing through to PHP-CGI, which is listening on a socket (managed through spawn-fcgi). There's an instance of Memcached running too; MySQL runs on a separate server / slave setup. Throughout the day the load will typically go no higher than 0.8 on either of the application servers, but at around 4PM our problem arises. I've managed to run strace on a few of the actual threads when they cause the problem and I always see the same thing:

        stat("/usr/share/zoneinfo/Europe/London", {st_mode=S_IFREG|0644, st_size=3661, ...}) = 0
        stat("/usr/share/zoneinfo/Europe/London", {st_mode=S_IFREG|0644, st_size=3661, ...}) = 0

    This is repeated infinitely and never stops until you SIGKILL the process or the oom-killer kills it. There are no cron jobs scheduled to run at that time, and I don't have any way of seeing exactly which Nginx request is associated with the PHP process that is running. We are running PHP 5.3.14, which we upgraded to from 5.3.8 last week to rule out the older version being the problem. This issue has been going on for a few months now and we have no idea what is causing it. We deploy our software very frequently, so it's difficult to track down a specific release which may have started the problem, especially as we do not know the date of the first occurrence of this issue. Varnish is version 3.0.1, Nginx is 1.0.6 (which I understand is about a year old now), our servers are running CentOS release 5.7 (Final), and they have Intel i3 540s at 3.07GHz and 8GB of RAM. There's a discussion on the Debian mailing list about something very similar; you can find that here. Has anyone seen anything like this in the past? Does anyone have any ideas or suggestions? Is there a way of linking an Nginx request directly to a PHP thread? Is there a better way of seeing what the PHP process is doing? (I've seen GDB mentioned, though I'll have to recompile PHP.) Thanks!
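
    Two angles that might help, given that strace output and the questions at the end; a sketch, with the PID as an example:

        # The endless stat() of the zoneinfo file suggests date/timezone lookups inside a tight
        # loop; confirm what timezone setting the PHP build is using (the FastCGI SAPI may read
        # a different php.ini than the CLI, so check a phpinfo() page as well)
        php -i | grep -i 'date.timezone'

        # To see what a spinning process is doing at the PHP level, attach gdb: a plain backtrace
        # needs no rebuild, and the .gdbinit shipped in the php-src tree adds a 'zbacktrace'
        # command that prints the PHP call stack (and hence which script/request it is running)
        gdb -p 12345 -batch -ex 'bt'

    Setting date.timezone explicitly will not by itself fix an infinite loop, but it removes the repeated filesystem lookups and often makes the real culprit easier to see in strace.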

    Read the article

  • Office 2010 always reconfiguring itself on startup

    - by Rhys Gibson
    I've just installed Office 2010 Professional Plus (upgrading from Office 2007). It works fine under my admin account, but when I log in with my wife's non-admin account, every time I open a document or start an app (Word, Excel, Publisher ...) Office 2010 goes through its configuration process (showing the standard install dialog and then running the bootstrap process) before it loads the app, which wastes 2-3 minutes. Once it's done this, the app runs fine and I can make setting changes that are remembered when it restarts, but I can't work out why it thinks it needs to configure the app each time. Any thoughts?

    Read the article
