Search Results

Search found 24382 results on 976 pages for 'tutor process procedure f'.

Page 203 of 976

  • Database Owner Conundrum

    - by Johnm
    Have you ever restored a database from a production environment on Server A into a development environment on Server B and had some items, such as Service Broker, mysteriously cease functioning? You might want to consider reviewing the database owner property of the database.

    The Scenario
    Recently, I was developing some messaging functionality that utilized the Service Broker feature of SQL Server in a development environment. Within the instance of the development environment resided two databases: one was a restored version of a production database that we will call "RestoreDB"; the second was a brand new database that does not yet exist in the production environment, which we will call "DevDB". The goal is to set up a communication path between RestoreDB and DevDB that will later be implemented in the production database. After implementing all of the Service Broker objects that are required to communicate within a database, as well as between two databases on the same instance, I found myself a bit confounded. My testing was showing that the communication was successful when it occurred internally within DevDB, but the communication between RestoreDB and DevDB did not appear to be working.

    Profiler to the rescue
    After carefully reviewing my code for any misspellings, missing commas or any other minor items that might be a syntactical cause of failure, I decided to launch Profiler to aid in the troubleshooting. After simulating the cross-database messaging, I noticed the following error appearing in Profiler:

        An exception occurred while enqueueing a message in the target queue. Error: 33009, State: 2. The database owner SID recorded in the master database differs from the database owner SID recorded in database '[Database Name Here]'. You should correct this situation by resetting the owner of database '[Database Name Here]' using the ALTER AUTHORIZATION statement.

    Now, this error message is a helpful one. Not only does it identify the issue in plain language, it also provides a potential solution. An execution of the following query, which utilizes the catalog view sys.transmission_queue, revealed the same error message for each communication attempt:

        SELECT * FROM sys.transmission_queue;

    Seeing the situation as a learning opportunity, I dove a bit deeper.

    Reviewing the database properties
    The owner of a specific database can be easily viewed by right-clicking the database in SQL Server Management Studio and selecting the "Properties" option. The owner is listed on the "General" page of the properties screen. In my scenario, the database on the production server was created by Frank the DBA; therefore his server login appeared as the owner: "ServerName\Frank". While this is interesting information, it certainly doesn't tell me much in regard to the SID (security identifier) and its existence, or lack thereof, in the master database as the error suggested. I pulled together the following query to gather more interesting information:

        SELECT
            a.name
            , a.owner_sid
            , b.sid
            , b.name
            , b.type_desc
        FROM
            master.sys.databases a
            LEFT OUTER JOIN master.sys.server_principals b
                ON a.owner_sid = b.sid
        WHERE
            a.name NOT IN ('master', 'tempdb', 'model', 'msdb');

    This query also helped identify how many other user databases in the instance were experiencing the same issue. In this scenario, I saw that there was no matching SID in server_principals for my database's owner SID.

    Which login should be used as the database owner instead of Frank's? The system stored procedure sp_helplogins will provide a list of the valid logins that can be used. Here is an example of its use, revealing all available logins:

        EXEC sp_helplogins;

    Fixing a hole
    The error message stated that the recommended solution was to execute the ALTER AUTHORIZATION statement. The full statement for this scenario would appear as follows:

        ALTER AUTHORIZATION ON DATABASE::[Database Name Here] TO [Login Name];

    Another option is to execute the following statement, which uses the sp_changedbowner system stored procedure; but please keep in mind that this stored procedure has been deprecated and will likely disappear in future versions of SQL Server:

        EXEC dbo.sp_changedbowner @loginname = [Login Name];

    And They Lived Happily Ever After
    Upon changing the database owner to an existing login and simulating the inner- and cross-database messaging, the errors ceased. More importantly, all messages sent through this feature now successfully complete their journey. I have added the ownership change to my restoration script for the development environment.
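    As a follow-up sketch (not from the original post), the detection query and the fix can be combined so that one script generates a repair statement for every affected database; the [sa] target below is only a placeholder for whichever login you actually want as the owner:

        -- Find user databases whose owner SID has no matching server login,
        -- and generate the ALTER AUTHORIZATION statement that would repair each one.
        SELECT
            d.name AS database_name
            , 'ALTER AUTHORIZATION ON DATABASE::[' + d.name + '] TO [sa];' AS suggested_fix
        FROM
            master.sys.databases d
            LEFT OUTER JOIN master.sys.server_principals p
                ON d.owner_sid = p.sid
        WHERE
            d.name NOT IN ('master', 'tempdb', 'model', 'msdb')
            AND p.sid IS NULL;

    Running the generated statements (or folding them into the restoration script, as the author does) realigns the owner SID recorded in master with the one recorded in the restored database.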

    Read the article

  • Using boost::asio::async_read with stdin?

    - by yeus
    Hi people, short question: I have a real-time simulation which runs as a background process and is connected with pipes to the calling program. I want to send commands to that process via stdin and get certain information back from it via stdout. Because it is a real-time process, the input has to be non-blocking. Is boost::asio::async_read in conjunction with std::cin a good idea for this task? How would I use that function if it is feasible? Any other suggestions?
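    One possible approach, sketched here rather than taken from the thread: std::cin itself cannot be handed to async_read, but on POSIX systems the stdin file descriptor can be wrapped in a boost::asio::posix::stream_descriptor and read asynchronously while the simulation keeps running (this class is not available on Windows, and the class name below is just an illustration):

        #include <boost/asio.hpp>
        #include <iostream>
        #include <string>
        #include <unistd.h>

        // Wraps stdin so asio can poll it without blocking the real-time loop.
        class StdinReader
        {
        public:
            explicit StdinReader(boost::asio::io_service& io)
                : input_(io, ::dup(STDIN_FILENO)) { start_read(); }

        private:
            void start_read()
            {
                boost::asio::async_read_until(input_, buffer_, '\n',
                    [this](const boost::system::error_code& ec, std::size_t /*bytes*/)
                    {
                        if (ec) return;                      // stdin closed or error
                        std::istream is(&buffer_);
                        std::string command;
                        std::getline(is, command);
                        std::cout << "command: " << command << std::endl; // dispatch it here
                        start_read();                        // keep listening
                    });
            }

            boost::asio::posix::stream_descriptor input_;
            boost::asio::streambuf buffer_;
        };

        int main()
        {
            boost::asio::io_service io;
            StdinReader reader(io);
            io.run(); // in the real program, share this io_service with the simulation loop
        }

    The lambda requires C++11; with an older compiler the handler can be bound with boost::bind instead.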

    Read the article

  • segfault during fclose()

    - by Hristo
    fclose() is causing a segfault. I have:

        char buffer[L_tmpnam];
        char *pipeName = tmpnam(buffer);
        FILE *pipeFD = fopen(pipeName, "w"); // open for writing
        ...
        fclose(pipeFD);

    I don't do any file-related stuff in the ... yet, so that doesn't affect it. However, my MAIN process communicates with another process through shared memory where pipeName is stored; the other process fopen's this pipe for reading to communicate with MAIN. Any ideas why this is causing a segfault? Thanks, Hristo
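    The most likely culprit, offered as a hedged guess: if fopen() fails (for example because the name is not actually a FIFO yet, or the path cannot be created), pipeFD is NULL, and calling fclose(NULL) is undefined behaviour that commonly segfaults. A minimal defensive sketch, assuming the writer is the side that should create the FIFO:

        #include <stdio.h>
        #include <stdlib.h>
        #include <sys/types.h>
        #include <sys/stat.h>

        int main(void)
        {
            char buffer[L_tmpnam];
            char *pipeName = tmpnam(buffer);

            /* Create a real named pipe; fopen() alone would just make a regular file. */
            if (mkfifo(pipeName, 0666) == -1) {
                perror("mkfifo");
                return EXIT_FAILURE;
            }

            /* Blocks until the reader opens the other end of the FIFO. */
            FILE *pipeFD = fopen(pipeName, "w");
            if (pipeFD == NULL) {                 /* never pass NULL to fclose() */
                perror("fopen");
                return EXIT_FAILURE;
            }

            /* ... write to the pipe ... */

            if (fclose(pipeFD) == EOF)
                perror("fclose");
            return EXIT_SUCCESS;
        }

    Checking every return value also rules out the other common cause of this crash, a double fclose() on the same FILE pointer.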

    Read the article

  • How should I deploy a patch to a Passenger-based production Rails application without downtime?

    - by Olly
    I have a Passenger-based production Rails application which has thousands of users. Occasionally we need to apply a code patch (we use git), and the current process for doing this (you can assume there are no data migrations) is: (1) perform git pull origin [production-branch-name] on the server, then (2) touch tmp/restart.txt to restart Passenger. This allows us to patch the server without having to resort to putting up a maintenance page, which is great, but it doesn't feel quite right since it's not actually a proper 'deployment'; we still need to manually update the revision file, and our deployment doesn't appear in the Hoptoad or NewRelic services we use. Ideally I would run cap production deploy and just let the standard Capistrano deployment script take care of everything, but is this a dangerous thing to do without putting up a maintenance page? This deployment process seems to be fairly safe in that the new revision is deployed to a completely separate folder and only right at the end of the process is a symlink re-created to switch the currently deployed version, but I'm still fairly paranoid about this somehow resulting in a lost or failed request.

    Read the article

  • jgraph like component on vaadin framework

    - by vsp
    Hi, I am currently working on a process-manager type of application, which needs a jgraph-like (http://www.jgraph.com) component. The process manager is essentially a process flow chart. I am using the Vaadin framework. Is there any such built-in component available in Vaadin, or is there a plugin I can use to create a drag & drop graph for my app? Please let me know if anyone has already done this. Thanks.

    Read the article

  • Image processing in a multithreaded mode using Java

    - by jadaaih
    Hi folks, I am supposed to process images in a multithreaded mode using Java. I may have a varying number of images, whereas my number of threads is fixed, and I have to process all the images using that fixed set of threads. I am just stuck on how to do it; I had a look at ThreadPoolExecutor, BlockingQueue, etc., and I am still not clear. What I am doing is:
    - Get the images and add them to a LinkedBlockingQueue which holds the runnable code of the image processor.
    - Create a ThreadPoolExecutor for which one of the arguments is the LinkedBlockingQueue created earlier.
    - Iterate through a for loop up to the queue size and call threadPoolExecutor.execute(linkedBlockingQueue.poll()).
    - All I see is that it processes only 100 images, which is the minimum thread size passed in the LinkedBlockingQueue size.
    I see I am seriously wrong in my understanding somewhere; how do I process all the images in sets of 100 (threads) until they are all done? Any examples or pseudocode would be highly helpful. Thanks! J
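    A minimal sketch of the usual pattern (class and method names are placeholders, as is the per-image work): submit one task per image to a fixed-size pool and let the executor's own work queue hold the backlog, instead of polling a LinkedBlockingQueue yourself. The pool size, not the queue size, is what bounds how many images are processed at once:

        import java.io.File;
        import java.util.List;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;
        import java.util.concurrent.TimeUnit;

        public class ImageBatchProcessor
        {
            public static void processAll(List<File> images, int threadCount) throws InterruptedException
            {
                // Exactly threadCount images are processed at a time; the rest wait
                // in the executor's internal work queue until a thread frees up.
                ExecutorService pool = Executors.newFixedThreadPool(threadCount);
                for (final File image : images) {
                    pool.submit(new Runnable() {
                        public void run() { processImage(image); }
                    });
                }
                pool.shutdown();                          // accept no new tasks
                pool.awaitTermination(1, TimeUnit.HOURS); // block until every image is done
            }

            private static void processImage(File image)
            {
                // ... the actual image processing goes here ...
            }
        }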

    Read the article

  • Google App Engine modifyThreadGroup problem

    - by Frank
    I'm using Google App Engine to process PayPal IPN messages. When my servlet starts, I use the following lines to start another worker to process the messages:

        public class PayPal_Monitor_Servlet extends HttpServlet
        {
            PayPal_Message_To_License_File_Worker PayPal_message_to_license_file_worker;

            public void init(ServletConfig config) throws ServletException // Initializes the servlet.
            {
                super.init(config);
                PayPal_message_to_license_file_worker = new PayPal_Message_To_License_File_Worker();
            }

            public void doGet(HttpServletRequest request, HttpServletResponse response) throws IOException
            {
            }
            ...
        }

        public class PayPal_Message_To_License_File_Worker implements Runnable
        {
            static Thread PayPal_Message_To_License_File_Thread;
            ...
            PayPal_Message_To_License_File_Worker()
            {
                start();
            }

            void start()
            {
                if (PayPal_Message_To_License_File_Thread == null)
                {
                    PayPal_Message_To_License_File_Thread = new Thread(this);
                    PayPal_Message_To_License_File_Thread.setPriority(Thread.MIN_PRIORITY);
                    PayPal_Message_To_License_File_Thread.start();
                }
                ...
            }
        }

    But "PayPal_Message_To_License_File_Thread = new Thread(this);" is causing the following error:

        javax.servlet.ServletContext log: unavailable
        java.security.AccessControlException: access denied (java.lang.RuntimePermission modifyThreadGroup)
            at java.security.AccessControlContext.checkPermission(AccessControlContext.java:355)
            at java.security.AccessController.checkPermission(AccessController.java:567)

    Why, and how do I fix it? Frank
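    For background, the App Engine sandbox (at least at the time of the question) does not allow request code to spawn its own threads, which is why new Thread(this) throws AccessControlException. A hedged sketch of the usual workaround is to hand the work to the Task Queue service and do the processing in an ordinary servlet; the /tasks/process-ipn URL and the parameter name here are made up for illustration, and on older SDKs the package is com.google.appengine.api.labs.taskqueue rather than com.google.appengine.api.taskqueue:

        import com.google.appengine.api.taskqueue.Queue;
        import com.google.appengine.api.taskqueue.QueueFactory;
        import com.google.appengine.api.taskqueue.TaskOptions;

        public class IpnTaskEnqueuer
        {
            // Called from the IPN servlet instead of starting a worker thread.
            public static void enqueue(String ipnMessageId)
            {
                Queue queue = QueueFactory.getDefaultQueue();
                // App Engine later POSTs these parameters to the given URL,
                // where a plain servlet turns the message into a license file.
                queue.add(TaskOptions.Builder
                        .withUrl("/tasks/process-ipn")
                        .param("messageId", ipnMessageId));
            }
        }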

    Read the article

  • MySQL Insert Statement Queue

    - by Justin
    We are building an Ajax application in which a user's input is submitted to a PHP script for processing. We currently write every request to a log file for tracking. I would like to move this tracking into a database table, but I do not want to run an insert statement after each request. What I would like to do is set up a 'queue' of transactions (inserts and updates) that need to be processed on the MySQL database. I would then set up a cron job or process to check and process the transactions in the queue. Is there something out there that we could build upon, or do we have to just write to plain ol' text log files and process them?

    Read the article

  • Detect a file is closed

    - by User1
    Is there a simple way to detect when a file that was opened for writing is closed? I have a process that kicks out a new file every 5 seconds. I cannot change the code for that process. I am writing another process to analyze the contents of each file immediately after it is created. Is there a simple way to watch a certain directory to see when a file is no longer open for writing? I'm hoping for some sort of Linux command.
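    One common answer, sketched under the assumption that the directory sits on a local filesystem: Linux's inotify API raises an IN_CLOSE_WRITE event when a file that was open for writing is closed, which is exactly the "safe to read now" signal. The inotifywait tool from the inotify-tools package exposes the same event on the command line (inotifywait -m -e close_write /some/dir); in C it looks roughly like this, with /some/dir as a placeholder path:

        #include <stdio.h>
        #include <stdlib.h>
        #include <sys/inotify.h>
        #include <unistd.h>

        int main(void)
        {
            char buf[4096];
            int fd = inotify_init();
            if (fd == -1) { perror("inotify_init"); return EXIT_FAILURE; }

            /* IN_CLOSE_WRITE fires when a writer closes a file in this directory. */
            if (inotify_add_watch(fd, "/some/dir", IN_CLOSE_WRITE) == -1) {
                perror("inotify_add_watch");
                return EXIT_FAILURE;
            }

            for (;;) {
                ssize_t len = read(fd, buf, sizeof(buf));
                if (len <= 0) break;
                for (char *p = buf; p < buf + len; ) {
                    struct inotify_event *ev = (struct inotify_event *) p;
                    if (ev->len > 0)
                        printf("finished writing: %s\n", ev->name); /* analyze it here */
                    p += sizeof(struct inotify_event) + ev->len;
                }
            }
            close(fd);
            return EXIT_SUCCESS;
        }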

    Read the article

  • What are some good books on software testing/quality?

    - by mjh2007
    I'm looking for a good book on software quality. It would be helpful if the book covered:
    - The software development process (requirements, design, coding, testing, maintenance)
    - Testing roles (who performs each step in the process)
    - Testing methods (white box and black box)
    - Testing levels (unit testing, integration testing, etc.)
    - Testing process (Agile, waterfall, spiral)
    - Testing tools (simulators, fixtures, and reporting software)
    - Testing of embedded systems
    The goal here is to find an easy-to-read book that summarizes the best practices for ensuring software quality in an embedded system. It seems most texts cover the testing of application software, where it is simpler to generate automated test cases or run a debugger. A book that provides solutions for improving quality in a system where the tests must be performed manually, and are therefore kept to a minimum, would be ideal.

    Read the article

  • Algorithmic trading software safety guards

    - by Adal
    I'm working on an automatic trading system. What sorts of safeguards should I have in place? The main idea I have is to have multiple pieces checking each other. I will have a second, independent little process which will also connect to the same trading account and monitor simple things, like ensuring the total net position does not go over a certain limit, that there are no more than N orders in 10 minutes, or that no more than M positions are open simultaneously. It can also check that the actual open positions correspond to what the strategy process thinks it holds. As a bonus, I could run this checker process on a different machine/network provider. Besides the checks in the main strategy, this should ensure that whatever weird bug occurs, nothing really bad can happen. Any other things I should monitor and be aware of?

    Read the article

  • scheduled task or windows service

    - by czuroski
    Hello, I have to create an app that will read in some info from a db, process the data, write changes back to the db, and then send an email with these changes to some users or groups. I will be writing this in C#, and this process must run once a week at a particular time. It will be running on a Windows 2008 Server. In the past, I would always go the route of creating a Windows service with a timer, setting the time/day for it to run in the app.config file so that it can be changed and the service only has to be restarted to catch the update. Recently, though, I have seen blog posts and such that recommend writing a console application and then using a scheduled task to execute it. I have read many posts discussing this very issue, but have not seen a definitive answer about which approach is better. What do any of you think? Thanks for any thoughts.

    Read the article

  • Multiple .NET processes memory footprint

    - by mr.b
    I am developing an application suite that consists of several applications which the user can run as needed (they can run all at the same time, or only several of them). My concern is the physical memory footprint of each process, as shown in Task Manager. I am aware that the Framework does memory management behind the curtains, in the sense that it devotes parts of memory to things that are not directly related to my application. The question: does the .NET Framework have some way of minimizing the memory footprint of the processes it runs when several of them are running at the same time? (Noobish guess ahead) Like, if System.dll is already loaded by one process, does the framework load it again for each process, or does it have some way of sharing it between processes? I am in no way striving to write apps that are as small (resource-wise) as possible (if I were, I probably wouldn't be using the .NET Framework in the first place), but if there's something I can do about over-using resources, I'd like to know about it.

    Read the article

  • Using Stored Procedures in SSIS

    - by dataintegration
    The SSIS Data Flow components, the source task and the destination task, are the easiest way to transfer data in SSIS. Some data transactions do not fit this model; they are procedural tasks modeled as stored procedures. In this article we show how you can call stored procedures available in RSSBus ADO.NET Providers from SSIS. We will use the CreateJob and CreateBatch stored procedures available in the RSSBus ADO.NET Provider for Salesforce, but the same steps can be used to call a stored procedure in any of our data providers.

    Step 1: Open Visual Studio and create a new Integration Services Project.

    Step 2: Add a new Data Flow Task to the Control Flow window.

    Step 3: Open the Data Flow Task and add a Script Component to the data flow pane. A dialog box will pop up allowing you to select the Script Component type: pick the source type, as we will be outputting columns from our stored procedure.

    Step 4: Double-click the Script Component to open the editor.

    Step 5: In the "Inputs and Outputs" settings, enter all the columns you want to output to the data flow. Ensure the correct data type has been set for each output. You can check the data type by selecting the output and then changing the "DataType" property in the property editor. In our example, we'll add the column JobID of type String.

    Step 6: Select the "Script" option in the left-hand pane and click the "Edit Script" button. This will open a new Visual Studio window with some boilerplate code in it.

    Step 7: In the CreateOutputRows() function you can add code that executes the stored procedures included with the Salesforce component. In this example we will be using the CreateJob and CreateBatch stored procedures. You can find a list of the available stored procedures, along with their inputs and outputs, in the product help.

        //Configure the connection string to your credentials
        String connectionString = "Offline=False;user=myusername;password=mypassword;access token=mytoken;";

        using (SalesforceConnection conn = new SalesforceConnection(connectionString)) {
            //Create the command to call the stored procedure CreateJob
            SalesforceCommand cmd = new SalesforceCommand("CreateJob", conn);
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.Add(new SalesforceParameter("ObjectName", "Contact"));
            cmd.Parameters.Add(new SalesforceParameter("Action", "insert"));

            //Execute CreateJob
            //CreateBatch requires JobID as input so we store this value for later
            SalesforceDataReader rdr = cmd.ExecuteReader();
            String JobID = "";
            while (rdr.Read()) {
                JobID = (String)rdr["JobID"];
            }

            //Create the command for CreateBatch, for this example we are adding two new rows
            SalesforceCommand batCmd = new SalesforceCommand("CreateBatch", conn);
            batCmd.CommandType = CommandType.StoredProcedure;
            batCmd.Parameters.Add(new SalesforceParameter("JobID", JobID));
            batCmd.Parameters.Add(new SalesforceParameter("Aggregate", "<Contact><Row><FirstName>Bill</FirstName>" +
                "<LastName>White</LastName></Row><Row><FirstName>Bob</FirstName><LastName>Black</LastName></Row></Contact>"));

            //Execute CreateBatch
            SalesforceDataReader batRdr = batCmd.ExecuteReader();
        }

    Step 7b: If you had specified output columns earlier, you can now add data to them using the UserComponent Output0Buffer. For example, we had set an output column called JobID of type String, so now we can set a value for it. We will modify the DataReader loop that contains the output of CreateJob like so:

        while (rdr.Read()) {
            Output0Buffer.AddRow();
            JobID = (String)rdr["JobID"];
            Output0Buffer.JobID = JobID;
        }

    Step 8: Note: you will need to modify the connection string to include your credentials. Also ensure that the System.Data.RSSBus.Salesforce assembly is referenced, and include the following using statements at the top of the class:

        using System.Data;
        using System.Data.RSSBus.Salesforce;

    Step 9: Once you are done editing your script, save it and close the window. Click OK in the Script Transformation window to go back to the main pane.

    Step 10: If you had any outputs from the Script Component, you can use them in your data flow. For example, we will use a Flat File Destination. Configure the Flat File Destination to output the results to a file, and you should see the JobID in the file.

    Step 11: Your project should be ready to run.

    Read the article

  • DesignMode with Controls

    - by John Dyer
    Has anyone found a useful solution to the DesignMode problem when developing controls? The issue is that if you nest controls, DesignMode only works for the first level; for the second and lower levels, DesignMode will always return FALSE. The standard hack has been to look at the name of the running process, and if it is "DevEnv.EXE" then it must be Studio, thus DesignMode is really TRUE. The problem is that looking up the ProcessName works its way through the registry and other strange parts, with the end result that the user might not have the required rights to see the process name. In addition, this strange route is very slow. So we have had to pile on additional hacks: use a singleton, and if an error is thrown when asking for the process name, assume that DesignMode is FALSE. A nice clean way to determine DesignMode is in order. Actually getting Microsoft to fix it internally in the framework would be even better!
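    Two workarounds that avoid the process-name hack are sketched below; neither is an official fix, and the class name is only illustrative. LicenseManager.UsageMode reports Designtime reliably even for nested controls, but only while the constructor is running, so the value has to be captured there; outside the constructor, walking up the Parent chain and checking each control's Site covers the nested case:

        using System.ComponentModel;
        using System.Windows.Forms;

        public class NestedAwareControl : UserControl
        {
            private readonly bool isDesignTime;

            public NestedAwareControl()
            {
                // Reliable for nested controls, but only inside the constructor.
                isDesignTime = LicenseManager.UsageMode == LicenseUsageMode.Designtime;
            }

            // For use after construction: DesignMode is only set on the control the
            // designer is actually hosting, so check every ancestor as well.
            private static bool IsDesignerHosted(Control control)
            {
                while (control != null)
                {
                    if (control.Site != null && control.Site.DesignMode)
                        return true;
                    control = control.Parent;
                }
                return false;
            }
        }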

    Read the article

  • How does a programmer think?

    - by Gordon Potter
    This may be a hopelessly vague question. But I am interested to hear whatever logical thought processes people go through when learning a new concept or trying to get their brain around code they might not have ever seen before. Basically, what general steps does one take to to break down problems and what does it take to "get it"? If you were to diagram a flowchart of how your mental process works when you look at code or try to solve a problem what might it look like? What common references, tips, and mental assumptions do you find useful in problem solving? How is this different between different domains? For example in what ways is a web programmer's thought process similar or different from a traditional desktop app developer's process?

    Read the article

  • High Throughput and Windows Workflow Foundation

    - by SometimesUseful
    Can WWF handle high-throughput scenarios where several dozen records are 'actively' being processed in parallel at any one time? We want to build a workflow process which handles a few thousand records per hour. Each record takes up to a minute to process, because it makes external web service calls. We are testing Windows Workflow Foundation to do this, but our demo programs show that the processing of each record appears to run in sequence, not in parallel, when we use parallel activities to process several records at once within one workflow instance. Should we use multiple workflow instances or parallel activities? Are there any known patterns for high-performance WWF processing?

    Read the article

  • Get application icon from ProcessSerialNumber

    - by Thomi
    I would like to get the application icon for every foreground application running on my Mac. I'm already iterating over all applications using the Process Manager API. I have determined that any process that does not have the modeBackgroundOnly flag set in its processMode (as retrieved from GetProcessInformation()) is a "foreground" application and shows up in the task switcher window. All I need is an API that gives me a CGImageRef (or similar) that contains the application icon for a process. I'm free to use either Carbon or Cocoa APIs.

    Read the article

  • Graceful DOS Command Error-Handling w/PHP popen()

    - by Captain Obvious
    PHP 5.2.13 on Windows 2003. I am using the DOS start /B command to launch a background application via the PHP popen() function:

        popen("start /B {$_SERVER['HOMEPATH']}/{$app}.exe > {$_SERVER['HOMEPATH']}/bg_output.log 2>&1 & echo $!", 'r');

    The popen() function launches a cmd.exe process that runs the specified command; however, if the command fails (e.g. the {$app}.exe doesn't exist or is locked in the above example), the cmd.exe process never returns, and PHP hangs indefinitely as a result. Calling the failing DOS command directly from the Command Prompt results in an error prompt that requires clicking the OK button. I assume this error-confirmation requirement is what's preventing the cmd.exe process from returning to PHP, both from the command line (using both CGI and CLI) and from the web (using the Apache 2.0 handler with Apache 2.2). Is there a way to configure PHP, Apache, and/or Windows 2003 to return the DOS error to the originating call rather than waiting for confirmation?

    Read the article

  • How are DLLs loaded by the CLR?

    - by priehl
    My assumption was always that the CLR loaded all of the DLLs it needed at startup of the app domain. However, I've written an example that makes me question this assumption. I start up my application and check to see how many modules are loaded:

        Process[] ObjModulesList;
        ProcessModuleCollection ObjModulesOrig;
        //Get all modules inside the process
        ObjModulesList = Process.GetProcessesByName("MyProcessName");
        // Populate the module collection.
        ObjModulesOrig = ObjModulesList[0].Modules;
        Console.WriteLine(ObjModulesOrig.Count.ToString());

    I then repeat the exact same code and my count is different. The additional DLL is C:\WINNT\system32\version.dll. I'm really confused as to why the counts would be different. Could someone please elaborate on what the CLR is doing, how it's loading these things, and by what logic it does so?

    Read the article

  • What interprocess locking calls should I monitor?

    - by Matt Joiner
    I'm monitoring a process with strace/ltrace in the hope of finding and intercepting a call that checks, and potentially activates, some kind of globally shared lock. While I've dealt with and read about several forms of interprocess locking on Linux before, I'm drawing a blank on what calls to look for. Currently my only suspect is futex(), which comes up very early in the process' execution.
    Update 0: There is some confusion about what I'm after. I'm monitoring an existing process for calls to persistent interprocess memory or equivalent. I'd like to know what system and library calls to look for. I have no intention of calling these myself, so naturally futex() will come up; I'm sure many libraries implement their locking calls in terms of it, etc.
    Update 1: I'd like a list of function names, or a link to documentation, that I should monitor at the ltrace and strace levels (specifying which level for each). Any other good advice about how to track down and locate the global lock in question would be great.

    Read the article

  • Web-Cam Video frames to System.Drawing.Bitmap

    - by Hooch
    Hello. I'm using the AForge.NET framework to process some data from a webcam. My image processor is working:

        class WebProc
        {
            public void Process(System.Drawing.Bitmap image)
            {
                ....
            }
        }

    I know how to feed it an image from the HDD. I was thinking about something like this:

        private void backgroundWorker1_DoWork(object sender, DoWorkEventArgs e)
        {
            while (!backgroundWorker1.CancellationPending)
            {
                Bitmap frame = GetVideoFrameSomeWay();
                WebProc.Process(frame);
            }
        }

    Can you help me with these questions? How do I get a frame from the webcam? How do I list all available webcams for the user to choose from? How do I open the webcam settings window? Is there a way to set the frame size to 640x480 or a user-chosen size?
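    A rough sketch of how this is usually wired up with AForge.Video.DirectShow, reusing the WebProc class from the question (the class and event names below are from that library as I recall them, so treat the details as an assumption to verify): FilterInfoCollection lists the available cameras for the user to choose from, VideoCaptureDevice raises a NewFrame event whose eventArgs.Frame is a System.Drawing.Bitmap, and DisplayPropertyPage() opens the driver's settings window; the frame size is set through the device's capability/resolution properties, which differ between AForge versions.

        using System;
        using System.Drawing;
        using AForge.Video;
        using AForge.Video.DirectShow;

        class WebcamCapture
        {
            private VideoCaptureDevice videoSource;
            private readonly WebProc processor = new WebProc();

            public void Start()
            {
                // Enumerate every capture device so the user can pick one.
                FilterInfoCollection devices = new FilterInfoCollection(FilterCategory.VideoInputDevice);
                foreach (FilterInfo device in devices)
                    Console.WriteLine(device.Name);

                videoSource = new VideoCaptureDevice(devices[0].MonikerString);

                // Each frame arrives as a Bitmap; clone it before processing,
                // because the library reuses the underlying buffer.
                videoSource.NewFrame += delegate(object sender, NewFrameEventArgs eventArgs)
                {
                    using (Bitmap frame = (Bitmap)eventArgs.Frame.Clone())
                    {
                        processor.Process(frame);
                    }
                };

                videoSource.Start();
            }

            public void Stop()
            {
                if (videoSource != null)
                    videoSource.SignalToStop();
            }
        }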

    Read the article

  • How do I manage specs in Scrum?

    - by this. __curious_geek
    Referring to this buddy question, I want to know how one can manage specs in a Scrum process. I'm facing this problem while assigning tasks to my team for the sprint. Needless to say, I'm new to Agile/Scrum. Currently, we are using our own specs sheet to map StoryId to SpecId and vice versa. I'm getting the feeling that Scrum is more about project management [getting things done on time] and that you need a separate process to manage specs and requirements. How do we manage specs in a Scrum process?

    Read the article

  • Intercept windows open file

    - by HyLian
    Hello, I'm trying to make a small program that could intercept the open process of a file. The purpose is that when a user double-clicks a file in a given folder, Windows would notify the software, which would then process that request and return the file's data to Windows. Maybe there is another solution, like monitoring open messages and forcing Windows to wait while the program prepares the contents of the file. One application of this concept could be to manage decryption of a file in a way that is transparent to the user. In this context, the encrypted file would be on disk, and when the user opens it (with a double-click or with some application such as Notepad), the background process would intercept that open event, decrypt the file and give the contents of that file to the asking application. It's a slightly strange concept; it's like the "Man in the Middle" network concept, but with files instead of network packets. Thanks for reading.

    Read the article

  • Java memory mapped files and swap

    - by MarkS
    I'm looking at memory-mapped files in Java. Let's say I have a heap size set to 2 GB and I memory-map a file that is 50 GB, far more than the physical memory on the machine. The OS will cache parts of that 50 GB file in the OS file cache, while the Java process will have 2 GB of heap space. What I'm curious about is how the OS decides how much of the 50 GB file to cache. For instance, if I have another Java process, also with a 2 GB heap size, will that 2 GB be swapped out to allow the OS to cache parts of the memory-mapped file? Will parts of the heap space of the first process be swapped out to allow the OS to cache? Is there any way to tell the OS not to swap heap space for OS caching? If the OS doesn't swap out main processes, how does it determine how big its file cache should be?
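    One concrete detail worth adding, shown as a sketch (the file name and window size are arbitrary): the mapped pages live in the OS page cache rather than on the Java heap, so -Xmx has no bearing on how much of the file is cached, and a single MappedByteBuffer is limited to Integer.MAX_VALUE bytes, which forces a 50 GB file to be mapped as a series of windows:

        import java.io.RandomAccessFile;
        import java.nio.MappedByteBuffer;
        import java.nio.channels.FileChannel;

        public class LargeFileMapper
        {
            static final long WINDOW = 1L << 30; // map 1 GB at a time

            public static void main(String[] args) throws Exception
            {
                try (RandomAccessFile file = new RandomAccessFile("huge.dat", "r");
                     FileChannel channel = file.getChannel())
                {
                    long size = channel.size();
                    long checksum = 0;
                    for (long pos = 0; pos < size; pos += WINDOW) {
                        long len = Math.min(WINDOW, size - pos);
                        // Pages are faulted in lazily and cached by the OS, not the JVM heap.
                        MappedByteBuffer buf = channel.map(FileChannel.MapMode.READ_ONLY, pos, len);
                        checksum += buf.get(0); // touch the mapping so something is actually read
                    }
                    System.out.println(checksum);
                }
            }
        }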

    Read the article
