Search Results

Search found 24347 results on 974 pages for 'cross process'.

  • Firefox and Chrome do not support cross-domain ajax by default?

    - by Ethan
    The following code works as expected in IE8 and Safari 4, but does not work in Firefox 3.6 or Chrome. All browsers are on Windows. <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"> <html> <head> <meta http-equiv="Content-Type" content="text/html; charset=UTF-8"> <link href="http://ajax.googleapis.com/ajax/libs/jqueryui/1.8/themes/smoothness/jquery-ui.css" rel="stylesheet" type="text/css" /> <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js"></script> <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jqueryui/1.8.1/jquery-ui.min.js"></script> <script type="text/javascript"> $(function() { $('#tabs').tabs(); }); </script> </head> <body> <div id="tabs"> <ul> <li><a href="http://www.google.com/">Google</a></li> <li><a href="http://www.msn.com/">MSN</a></li> </ul> </div> </body> </html> It seems that Firefox and Chrome do not allow cross-domain ajax by default, right? Is there any easy way to turn on cross-domain ajax in Firefox and Chrome?
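    A note for readers hitting the same wall: every mainstream browser enforces the same-origin policy for XMLHttpRequest, and jQuery UI tabs load remote hrefs via ajax, so the cross-domain panels fail wherever that policy is applied strictly. A common workaround of that era was JSONP, which loads a <script> tag instead of using an XHR. A minimal sketch, assuming a service that actually returns JSONP (the endpoint URL here is hypothetical; plain HTML pages like google.com will not work):

        $.ajax({
            url: 'http://api.example.com/data?callback=?', // hypothetical JSONP endpoint
            dataType: 'jsonp',
            success: function (data) {
                // render the payload into the tab panel
                $('#tabs-1').text('Loaded: ' + data.title);
            }
        });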

    Read the article

  • Is it still true that making cross-browser layouts for desktop browsers with table+CSS is easier than div+CSS?

    - by metal-gear-solid
    One of my web-designer friends still builds sites with tables, but he uses CSS well; I also use CSS, but with <div>, and I run into more cross-browser layout problems than my friend does. I gave my friend some reasons against <table>. Read our whole discussion: I - Your site will be problematic with screen readers. My friend - OK, but I have never got a call from any client about this. I - You will spend more time making layout changes when the client asks for them. My friend - I don't think so, but if that's true, show me how I can save time with <div>. I - Your sites will not work well with search engines. My friend - That's not true. I've made many sites and had no problem with any site or client over this. I - Table layout is the old, non-W3C, non-standard way. My friend - What is old and what is new? Who is the W3C? I don't know what the standard is. Whatever I make works in all browsers; that's enough for me, and my client will not pay for standards and W3C guideline rules. I - Your site will not work in mobile browsers. My friend - No problem for me; my clients don't care about mobile phones. I - Your sites are not accessible. My friend - What do you mean, not accessible? Whatever I make works in all browsers. No client has ever asked me about accessibility. I - You will not get more work in the future with tables. My friend - OK, no problem. When clients stop accepting sites built with tables, I will learn about div-based layouts. My questions: Is it still true that making cross-browser layouts for desktop browsers with table+CSS is easier than div+CSS? What is the benefit for a developer of using DIV+CSS layout in place of <table> layouts if the client doesn't mind which I use?

    Read the article

  • Checking who is connected to your server, with PowerShell.

    - by Fatherjack
    There are many occasions when, as a DBA, you want to see who is connected to your SQL Server, along with how they are connecting and what sort of activities they are carrying out. I’m going to look at a couple of ways of getting this information and compare the effort required and the results achieved by each. SQL Server comes with a couple of stored procedures to help with this sort of task – sp_who and its undocumented counterpart sp_who2. There is also the pumped-up version of these called sp_whoisactive, written by Adam Machanic, which does way more than these procedures. I wholly recommend you try it out if you don’t already know how it works; when it comes to serious interrogation of your SQL Server activity it is absolutely indispensable. Anyway, back to the point of this blog: we are going to look at getting the information from sp_who2 for a remote server. I wrote this PowerShell script a week or so ago and was quietly happy with it for a while. I’m relatively new to PowerShell, so forgive both my rather low threshold for entertainment and the fact that something so simple is a moderate achievement for me. $Server = 'SERVERNAME' $SMOServer = New-Object Microsoft.SQLServer.Management.SMO.Server $Server # connection and query stuff         $ConnectionStr = "Server=$Server;Database=Master;Integrated Security=True" $Query = "EXEC sp_who2" $Connection = new-object system.Data.SQLClient.SQLConnection $Table = new-object "System.Data.DataTable" $Connection.connectionstring = $ConnectionStr try{ $Connection.open() $Command = $Connection.CreateCommand() $Command.commandtext = $Query $result = $Command.ExecuteReader() $Table.Load($result) } catch{ # Show error $error[0] | format-list -Force } $Title = "Data access processes (" + $Table.Rows.Count + ")" $Table | Out-GridView -Title $Title $Connection.close() So this is pretty straightforward: create an SMO object that represents our chosen server, define a connection to the database and a table object for the results when we get them, execute our query over the connection, load the results into our table object and then, if everything is error free, display these results in the PowerShell grid viewer. The query simply gets the results of 'EXEC sp_who2' for us. How many connections there are will influence how long the query runs. The grid viewer lets me sort and search the results, so it can be a pretty handy way to locate troublesome connections. Like I say, I was quite pleased with this; it seems a pretty simple script and was working well for me. I have added a few parameters to control the output and give me more specific details, but then I saw a script that uses the $SMOServer object itself to provide the process information, saving me from having to define the connection object and query specifications. $Server = 'SERVERNAME' $SMOServer = New-Object Microsoft.SQLServer.Management.SMO.Server $Server $Processes = $SMOServer.EnumProcesses() $Title = "SMO processes (" + $Processes.Rows.Count + ")" $Processes | Out-GridView -Title $Title Create the SMO object of our server and then call the EnumProcesses method to get all the process information from the server. Staggeringly simple! The results are a little different though. Some columns are the same and we can see the same basic information, so my first thought was to check which runs faster – so that I can get my results more quickly and also so that I place less stress on my server(s). PowerShell comes with a great way of testing this – the Measure-Command function.
    All you have to do is wrap your piece of code in Measure-Command {[your code here]} and it will spit out the time taken to execute the code. So, I placed both of the above methods of getting SQL Server process connections in two Measure-Command wrappers and pressed F5! The PowerShell console goes blank for a while as the code is executed internally when Measure-Command is used, but the grid viewer windows appear and the console shows the timings. You can take the output from Measure-Command and format it for easier reading, but in a simple comparison like this we can simply cross-refer the TotalMilliseconds values from the two result sets to see how the two methods performed. The query execution method (running EXEC sp_who2) is the first set of timings and the SMO EnumProcesses is the second. I have run these on a variety of servers and, while the results vary from execution to execution, I have never seen the SMO version slower than the other. The difference has varied, and the time for both has ranged from sub-second to almost 5 seconds on other systems. This difference, I would suggest, is partly due to the overhead of having to construct the data connection and so on, whereas the SMO EnumProcesses method has the connection to the server already in place and just needs to call back the process information. There is also the difference in the data sets to consider. Let's take a look at what we get and where the two methods differ (sp_who2 column / SMO EnumProcesses column: description):
    - (none) / Urn: what looks like an XML or JSON representation of the server name and the process ID
    - SPID / Spid: the process ID
    - Status / Status: the status of the process
    - Login / Login: the login name of the user executing the command
    - HostName / Host: the name of the computer where the process originated
    - BlkBy / BlockingSpid: the SPID of a process that is blocking this one
    - DBName / Database: the database that this process is connected to
    - Command / Command: the type of command that is executing
    - CPUTime / Cpu: the CPU activity related to this process
    - DiskIO / (none): the disk IO activity related to this process
    - LastBatch / (none): the time the last batch was executed from this process
    - ProgramName / Program: the application that is facilitating the process connection to the SQL Server
    - SPID1 / (none): in my experience this is always the same value as SPID
    - REQUESTID / (none): in my experience this is always 0
    - (none) / Name: in my experience this is always the same value as SPID and so could be seen as analogous to SPID1 from sp_who2
    - (none) / MemUsage: an indication of the memory used by this process, but I don't know what it is measured in (bytes, Kb, Mb…)
    - (none) / IsSystem: True or False depending on whether the process is internal to the SQL Server instance or has been created by an external connection requesting data
    - (none) / ExecutionContextID: in my experience this is always 0, so could be analogous to REQUESTID from sp_who2
    Please note, these are my own very brief descriptions of these columns; detail can be found on MSDN for the columns in the sp_who results here: http://msdn.microsoft.com/en-GB/library/ms174313.aspx. Where the columns are common I would use that description; in other cases the information returned is purely for interpretation by the reader. Rather annoyingly, both result sets have useful information that the other doesn't. sp_who2 returns DiskIO and LastBatch information, which is really useful, but the SMO processes method gives you IsSystem and MemUsage, which have their place in fault diagnosis methods too. So which is better?
On reflection I think I prefer to use the sp_who2 method primarily but knowing that the SMO Enumprocesses method is there when I need it is really useful and I’m sure I’ll use it regularly. I’m OK with the fact that it is the slower method because Measure-Command has shown me how close it is to the other option and that it really isn’t a large enough margin to matter.
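    For readers who want to repeat the experiment, here is a minimal sketch of the timing harness described above (SERVERNAME is a placeholder, and the SMO assemblies must already be loaded, e.g. by running inside sqlps):

        $Server = 'SERVERNAME'   # placeholder - substitute your instance name

        # Time the sp_who2 route (connection + query + load into a DataTable)
        $QueryTime = Measure-Command {
            $Connection = New-Object System.Data.SqlClient.SqlConnection
            $Connection.ConnectionString = "Server=$Server;Database=Master;Integrated Security=True"
            $Connection.Open()
            $Command = $Connection.CreateCommand()
            $Command.CommandText = 'EXEC sp_who2'
            $Table = New-Object System.Data.DataTable
            $Table.Load($Command.ExecuteReader())
            $Connection.Close()
        }

        # Time the SMO route (EnumProcesses on an existing server object)
        $SMOServer = New-Object Microsoft.SqlServer.Management.Smo.Server $Server
        $SMOTime = Measure-Command { $Processes = $SMOServer.EnumProcesses() }

        # Compare the two durations
        "sp_who2: {0} ms  SMO: {1} ms" -f $QueryTime.TotalMilliseconds, $SMOTime.TotalMilliseconds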

    Read the article

  • Forensic Analysis of the OOM-Killer

    - by Oddthinking
    Ubuntu's Out-Of-Memory Killer wreaked havoc on my server, quietly assassinating my applications, sendmail, apache and others. I've managed to learn what the OOM Killer is, and about its "badness" rules. While my machine is small, my applications are even smaller, and typically only half of my physical memory is in use, let alone swap-space, so I was surprised. I am trying to work out the culprit, but I don't know how to read the OOM-Killer logs. Can anyone please point me to a tutorial on how to read the data in the logs (what are ve, free and gen?), or help me parse these logs? Apr 20 20:03:27 EL135 kernel: kill_signal(13516.0): selecting to kill, queued 0, seq 1, exc 2326 0 goal 2326 0... Apr 20 20:03:27 EL135 kernel: kill_signal(13516.0): task ebb0c6f0, thg d33a1b00, sig 1 Apr 20 20:03:27 EL135 kernel: kill_signal(13516.0): selected 1, signalled 1, queued 1, seq 1, exc 2326 0 red 61795 745 Apr 20 20:03:27 EL135 kernel: kill_signal(13516.0): selecting to kill, queued 0, seq 2, exc 122 0 goal 383 0... Apr 20 20:03:27 EL135 kernel: kill_signal(13516.0): task ebb0c6f0, thg d33a1b00, sig 1 Apr 20 20:03:27 EL135 kernel: kill_signal(13516.0): selected 1, signalled 1, queued 1, seq 2, exc 383 0 red 61795 745 Apr 20 20:03:27 EL135 kernel: kill_signal(13516.0): task ebb0c6f0, thg d33a1b00, sig 2 Apr 20 20:03:27 EL135 kernel: OOM killed process watchdog (pid=14490, ve=13516) exited, free=43104 gen=24501. Apr 20 20:03:27 EL135 kernel: OOM killed process tail (pid=4457, ve=13516) exited, free=43104 gen=24502. Apr 20 20:03:27 EL135 kernel: OOM killed process ntpd (pid=10816, ve=13516) exited, free=43104 gen=24503. Apr 20 20:03:27 EL135 kernel: OOM killed process tail (pid=27401, ve=13516) exited, free=43104 gen=24504. Apr 20 20:03:27 EL135 kernel: OOM killed process tail (pid=29009, ve=13516) exited, free=43104 gen=24505. Apr 20 20:03:27 EL135 kernel: OOM killed process apache2 (pid=10557, ve=13516) exited, free=49552 gen=24506. Apr 20 20:03:27 EL135 kernel: OOM killed process apache2 (pid=24983, ve=13516) exited, free=53117 gen=24507. Apr 20 20:03:27 EL135 kernel: OOM killed process apache2 (pid=29129, ve=13516) exited, free=68493 gen=24508. Apr 20 20:03:27 EL135 kernel: OOM killed process sendmail-mta (pid=941, ve=13516) exited, free=68803 gen=24509. Apr 20 20:03:27 EL135 kernel: OOM killed process tail (pid=12418, ve=13516) exited, free=69330 gen=24510. Apr 20 20:03:27 EL135 kernel: OOM killed process python (pid=22953, ve=13516) exited, free=72275 gen=24511. Apr 20 20:03:27 EL135 kernel: OOM killed process apache2 (pid=6624, ve=13516) exited, free=76398 gen=24512. Apr 20 20:03:27 EL135 kernel: OOM killed process python (pid=23317, ve=13516) exited, free=94285 gen=24513. Apr 20 20:03:27 EL135 kernel: OOM killed process tail (pid=29030, ve=13516) exited, free=95339 gen=24514. Apr 20 20:03:28 EL135 kernel: OOM killed process apache2 (pid=20583, ve=13516) exited, free=101663 gen=24515. Apr 20 20:03:28 EL135 kernel: OOM killed process logger (pid=12894, ve=13516) exited, free=101694 gen=24516. Apr 20 20:03:28 EL135 kernel: OOM killed process bash (pid=21119, ve=13516) exited, free=101849 gen=24517. Apr 20 20:03:28 EL135 kernel: OOM killed process atd (pid=991, ve=13516) exited, free=101880 gen=24518. Apr 20 20:03:28 EL135 kernel: OOM killed process apache2 (pid=14649, ve=13516) exited, free=102748 gen=24519. Apr 20 20:03:28 EL135 kernel: OOM killed process grep (pid=21375, ve=13516) exited, free=132167 gen=24520. 
    Apr 20 20:03:57 EL135 kernel: kill_signal(13516.0): selecting to kill, queued 0, seq 4, exc 4215 0 goal 4826 0... Apr 20 20:03:57 EL135 kernel: kill_signal(13516.0): task ede29370, thg df98b880, sig 1 Apr 20 20:03:57 EL135 kernel: kill_signal(13516.0): selected 1, signalled 1, queued 1, seq 4, exc 4826 0 red 189481 331 Apr 20 20:03:57 EL135 kernel: kill_signal(13516.0): task ede29370, thg df98b880, sig 2 Apr 20 20:04:53 EL135 kernel: kill_signal(13516.0): selecting to kill, queued 0, seq 5, exc 3564 0 goal 3564 0... Apr 20 20:04:53 EL135 kernel: kill_signal(13516.0): task c6c90110, thg cdb1a100, sig 1 Apr 20 20:04:53 EL135 kernel: kill_signal(13516.0): selected 1, signalled 1, queued 1, seq 5, exc 3564 0 red 189481 331 Apr 20 20:04:53 EL135 kernel: kill_signal(13516.0): task c6c90110, thg cdb1a100, sig 2 Apr 20 20:07:14 EL135 kernel: kill_signal(13516.0): selecting to kill, queued 0, seq 6, exc 8071 0 goal 8071 0... Apr 20 20:07:14 EL135 kernel: kill_signal(13516.0): task d7294050, thg c03f42c0, sig 1 Apr 20 20:07:14 EL135 kernel: kill_signal(13516.0): selected 1, signalled 1, queued 1, seq 6, exc 8071 0 red 189481 331 Apr 20 20:07:14 EL135 kernel: kill_signal(13516.0): task d7294050, thg c03f42c0, sig 2 Watchdog is a watchdog task that was idle; nothing in the logs suggests it had done anything for days. Its job is to restart one of the applications if it dies, so it is a bit ironic that it is the first to get killed. Tail was monitoring a few log files. Unlikely to be consuming memory madly. The apache web-server only serves pages to a little old lady who only uses it to get to church on Sundays, in other words a couple of developers who were in bed asleep and hadn't visited a page on the site for a few weeks. The only traffic it might have had is from the port-scanners; all the content is password-protected and not linked from anywhere, so no spiders are interested. Python is running two separate custom applications. Nothing in the logs suggests they weren't humming along as normal. One of them was a relatively recent implementation, which makes it suspect #1. It doesn't have any data-structures of any significance, and normally uses only about 8% of the total physical RAM. It hasn't misbehaved since. The grep is suspect #2, and the one I want to be guilty, because it was a one-off command. The command (which piped the output of a grep -r to another grep) had been started at least 30 minutes earlier, and the fact that it was still running is suspicious. However, I wouldn't have thought grep would ever use a significant amount of memory. It took a while for the OOM killer to get to it, which suggests it wasn't going mad, but the OOM killer stopped once it was killed, suggesting it may have been a memory-hog that finally satisfied the OOM killer's blood-lust.
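    If it helps anyone staring at similar logs, a rough way to summarize which processes this OOM-killer hit (the log path and the field number assume the "OOM killed process NAME (pid=...)" format shown above and may differ between distributions):

        # Count OOM kills per process name from a syslog-style log
        grep 'OOM killed process' /var/log/messages | awk '{print $9}' | sort | uniq -c | sort -rn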

    Read the article

  • Xubuntu 13.10 64bit - Slow and buggy "log out" process?

    - by MrKatSwordfish
    I'm a Windows convert who has done only a little bit of dabbling in Ubuntu in the past (back in the Dapper Drake days). A lot has changed since then, and I've been yearning to jump back into linux again! So, having just bought a new SSD, I felt that this would be as good a time as any to set up a dual-boot system again. I've messed around with Ubuntu 13.10 a bit, and while Unity has its issues, I think that it still needs some time to develop. I looked into XFCE and liked it a lot, so I went with Xubuntu. I've installed Xubuntu, and for the most part it's running smoothly and is a pleasure to work with. The customization is great and the minimalistic look and feel is really nice! But here's my problem: whenever I select the "Log Out" option from either the application menu or the user profiles menu, my PC comes to a crawl, and the dialog box with all the options (shut down, restart, log out, etc.) takes maybe a minute or more to appear. I click the log out button, my PC is brought to a snail's pace, and I have to wait for what seems like an eternity for the logout options to appear! If I try to open something else (even a terminal window) while it's loading the logout options, that other program won't finish loading until the logout screen finally appears. Keep in mind, this is a pretty much vanilla install of Xubuntu 13.10 64bit, on a PC with an Intel i7, an SSD, 6GB DDR3 RAM, and a new AMD 7770 gpu (drivers haven't been installed yet, though). Everything else runs fast, and most applications open near-instantly! It must be an issue with how the logout options screen initializes or something, but I'm not sure exactly how I can fix it. Edit - Extra Info: This problem is very consistent when using the "Log Out" buttons in Xubuntu. However, I've found that I'm able to reboot and shut down much more quickly by going through the "Switch User" screen and using the reboot or shutdown buttons on that screen. I'm nearly certain that it has something to do with the little Log Out options screen that appears when I select Log Out from the menu, and not the actual process of shutting down. So what should I do? I really like XFCE so far, and I've never tried a non-ubuntu based distro before, but should I just switch to something else? Is there any known fix for this issue? Are there any work-arounds for logging out/shutting down/rebooting via the terminal so that I can avoid this irritating bug? Is there any way that I can monitor the progress of the log out via the terminal, allowing me to see which parts are causing the slow-down? What is the best way to report this bug to someone?
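    On the terminal work-around asked about above: a sketch, assuming Xfce's stock session tools are installed (the flag names are from memory and worth double-checking on 13.10):

        # Skip the slow logout dialog entirely:
        xfce4-session-logout --logout        # end the session with no dialog
        xfce4-session-logout --reboot        # reboot via the session manager
        sudo shutdown -h now                 # power off from a terminal, bypassing the GUI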

    Read the article

  • Tech Talk: Can we put "Simple" in Application Business Process Management?

    - by Tanu Sood
    Customers are challenged with looking for answers to basic questions: Why can't it be simpler to connect applications in cloud to applications on-premise? How do I make my business more responsive and agile? How do I create end-to-end processes joining multiple applications? Tune in to this Tech Talk session with Amit Zavery, Vice President of Product Management for Oracle Fusion Middleware, as he discusses the relevance and importance of business process management and how Oracle BPM Suite is benefiting customers across all industries. For other Fusion Middleware talks, subscribe to Fusion Middleware Radio today and visit us on oracle.com

    Read the article

  • CentOS 5.5 remote kickstart installation stalls at "Starting install process." How to debug?

    - by ewwhite
    Hello, I'm having a difficult time with a remote CentOS 5.5 kickstart installation on an HP ProLiant DL360 G6. This is in an environment where I maintain an internal CentOS yum repository. The kickstart installation and post scripts have been tested and normally work. This hardware is also common in this environment, so I do not believe that it is a factor. Unfortunately, I'm having problems with a specific server install. The system is remote to the yum repository at a distance of 500 miles. They are connected over a private low-latency 100-megabit layer 2 connection (26ms round-trip). I'm mounting the 10MB CentOS 5 netinstall ISO image via an HP ILO remote console. The initial boot parameters are: linux ks=http://yum.abctrading.com/prop.cfg ksdevice=eth0 ip=x.x.x.x dns=x.x.x.x netmask=255.255.255.0 gateway=x.x.x.x I'm using the url --url http://ks.abctrading.com/5.5/os/x86_64/ method of installation. This quickly boots into the anaconda installer, pulls the kickstart config and formats the drives. The process eventually halts at the screen below, reading "Starting install process." Going to the other virtual consoles gives the second image below. The process stalls at this point and cannot proceed with the rest of the installation. Running the same kickstart config locally works just fine. I've tried mounting the boot ISO from the console as well as from the ILO2 command line pointing to a locally-hosted boot ISO via http. How can I debug this? Are there any options I've overlooked?
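    When anaconda stalls like this, the other virtual consoles usually hold the clues; as a sketch (console numbers and log paths are from memory for CentOS 5 and worth verifying):

        # From the ILO remote console, switch terminals during the install:
        #   Alt+F2 -> a shell inside the installer environment
        #   Alt+F3 -> anaconda's install log    Alt+F4 -> kernel messages
        # From the Alt+F2 shell, the logs can be read directly:
        cat /tmp/anaconda.log
        tail -f /tmp/syslog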

    Read the article

  • localhost/127.0.0.1 not working, "Unable to connect"

    - by redconservatory
    I am running some pretty basic php sites on Snow Leopard. Usually I just go to my browser and type anything like: localhost http://localhost 127.0.0.1 mycomputername.local But suddenly, after installing a gem file (compass), none of this is working. I tried sudo apachectl restart, thinking that I just needed to restart apache, but no luck. My error log looks like: [Mon Mar 26 09:39:08 2012] [warn] child process 45443 still did not exit, sending a SIGTERM [Mon Mar 26 09:39:10 2012] [warn] child process 45223 still did not exit, sending a SIGTERM [Mon Mar 26 09:39:10 2012] [warn] child process 45043 still did not exit, sending a SIGTERM [Mon Mar 26 09:39:10 2012] [warn] child process 45438 still did not exit, sending a SIGTERM [Mon Mar 26 09:39:10 2012] [warn] child process 45049 still did not exit, sending a SIGTERM [Mon Mar 26 09:39:10 2012] [warn] child process 45439 still did not exit, sending a SIGTERM [Mon Mar 26 09:39:10 2012] [warn] child process 45224 still did not exit, sending a SIGTERM [Mon Mar 26 09:39:10 2012] [warn] child process 45440 still did not exit, sending a SIGTERM [Mon Mar 26 09:39:10 2012] [warn] child process 45441 still did not exit, sending a SIGTERM [Mon Mar 26 09:39:10 2012] [warn] child process 45442 still did not exit, sending a SIGTERM [Mon Mar 26 09:39:10 2012] [warn] child process 45443 still did not exit, sending a SIGTERM [Mon Mar 26 09:39:11 2012] [notice] caught SIGTERM, shutting down I also tried sudo apachectl -k start And I got the error: Syntax error on line 182 of /private/etc/apache2/httpd.conf: Illegal option When I look at the code around that line, I see: <Directory /> Options Indexes MultiViews + FollowSymLinks AllowOverride All Order allow, deny Allow from all </Directory>
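    For what it's worth, two directives in that block look mangled, which would explain the "Illegal option" error: the space after the "+" makes Apache parse a bare "+" as an option, and "Order allow, deny" must be written without a space. Apache also refuses to mix "+"/"-" prefixed options with plain ones, so the simplest repair drops the prefix. A corrected version of the same block (a sketch of the likely intent, not the author's verified config):

        <Directory />
            Options Indexes MultiViews FollowSymLinks
            AllowOverride All
            Order allow,deny
            Allow from all
        </Directory>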

    Read the article

  • How can I confirm that the burn process completed successfully?

    - by infant programmer
    Just now I tried to make a duplicate copy of my Windows XP Pro SP3 CD. I had set 32X speed while the max speed allowed is 48X, and checked "Verify the data after burning". I didn't notice what percentage it had completed, but after 15 minutes (which is a sufficient period to burn a disc) I heard a "Fatal Error" sound. When I looked at the monitor, there was a message box showing "Error while burning Disc". I saved the error log, which you can find here: click_me. How do I confirm that the burn process finished? My PC is able to read the CD, and the CD is bootable too, but I am not sure whether the burn process failed or the verify process! Please let me know if you have come across or are aware of such a situation. As it is an XP installation CD, I don't want to use trial and error. regards,
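    One way to settle whether the disc is actually complete, assuming access to any machine with a shell plus the source ISO (or an image ripped from the source disc), is to compare checksums; a sketch, with the device name and file name as placeholders:

        # Hash the source image and the freshly burned disc; if the burn
        # completed, the two digests should match. Reading exactly the
        # image-sized prefix of the disc avoids trailing padding.
        md5sum original.iso
        dd if=/dev/cdrom bs=2048 count=$(( $(stat -c %s original.iso) / 2048 )) | md5sum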

    Read the article

  • download and process a file by ftp at set intervals, with error handling, rescheduling and status messages

    - by compound eye
    I want to download a data file from a remote ftp server to my machine at regular intervals. Once the file is downloaded I want to call another script which will process the file. My development machine is mac os x; the eventual deployment environment is linux. What would be the stock-standard way to automate this? I know I can use cron to schedule curl to download, and to run a script that will process the downloaded file at regular intervals, and I know I could write a slightly more complex script or an application that would do this and add error handling, rescheduling and sending status emails. But one of my requirements for this project is to write as little custom code as possible; instead I should try to use standard, tried and true existing tools, and if I do have to write code, to try and write the most straightforward code possible. The reason for this is that the code will potentially be installed on a large number of machines, all of which will need to be tweaked, customised and maintained by different people, long after I am gone from the project, so the intention is to use well documented, well supported tools as much as possible. This seems such a common task that there must be tools and scripts all over the internet, written by people who have carefully considered everything that could possibly go wrong when you need to download and process a file from a remote server at regular intervals, with error handling, rescheduling and sending status messages. Is that what Expect is for? What would you recommend? (the system will be downloading weather prediction data every six hours, so that the system can prepare in the event of bad weather warnings)
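    As a baseline using only stock tools (cron, curl, mail), something like the following wrapper comes close; every URL, path and script name here is a placeholder for illustration:

        #!/bin/sh
        # fetch-and-process.sh -- sketch of a cron-driven download/process step.
        set -e
        URL='ftp://ftp.example.com/data/forecast.dat'
        DEST='/var/data/forecast.dat'

        # Download to a temp name so a half-finished file is never processed
        if curl --fail --silent --show-error -o "$DEST.tmp" "$URL"; then
            mv "$DEST.tmp" "$DEST"
            /usr/local/bin/process-forecast "$DEST"   # hypothetical processing script
        else
            echo "download failed: $URL" | mail -s 'forecast fetch failed' ops@example.com
        fi

    Scheduled every six hours with a crontab entry such as: 0 */6 * * * /usr/local/bin/fetch-and-process.sh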

    Read the article

  • How to find out which process is hogging the Linux server?

    - by user1149518
    We have a RHEL server. Today it suddenly became slow. Symptoms: it was responding slowly to pings from another server, and when I tried to log in using ssh, it took about 10 seconds. I was able to resolve the problem by guesswork: I killed one process that I thought was the culprit, and that fixed it. I would like to know the proper approach to detecting the culprit in this kind of "slow server" situation, and the proper way to resolve such slowness issues. These were the conditions when the server was slow - # vmstat 3 3 procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------ r b swpd free buff cache si so bi bo in cs us sy id wa st 1 1 176 6730868 285052 4899676 0 0 3 4 0 0 1 1 97 1 0 0 0 176 6751576 285064 4899704 0 0 0 115 15307 37171 1 1 96 3 0 0 0 176 6751948 285068 4899700 0 0 0 23 14813 39559 1 1 98 1 0 # top top - 16:38:18 up 150 days, 19:36, 64 users, load average: 1.68, 1.46, 1.44 Tasks: 1287 total, 2 running, 1284 sleeping, 1 stopped, 0 zombie Cpu(s): 1.3%us, 1.7%sy, 0.1%ni, 95.9%id, 0.7%wa, 0.0%hi, 0.2%si, 0.0%st Mem: 16620824k total, 9867124k used, 6753700k free, 287424k buffers Swap: 8193140k total, 176k used, 8192964k free, 4898996k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 26258 khk 34 19 130m 47m 7088 S 11.2 0.3 385:32.42 edm
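    For a more systematic first pass than guesswork, the usual commands sort processes by CPU and memory and check for I/O wait (iostat comes from the sysstat package):

        # Top CPU consumers
        ps -eo pid,user,pcpu,pmem,comm --sort=-pcpu | head -10
        # Top memory consumers
        ps -eo pid,user,pcpu,pmem,comm --sort=-pmem | head -10
        # I/O wait often masquerades as general "slowness"
        iostat -x 3 3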

    Read the article

  • What Counts For a DBA: Simplicity

    - by Louis Davidson
    Too many computer processes do an apparently simple task in a bizarrely complex way. They remind me of the strips of one of my favorite artists: Rube Goldberg. In order to keep the boss from knowing one was late, a process is devised whereby the cuckoo clock kisses a live cuckoo bird, who then pulls a string, which triggers a hat flinging, which in turn lands on a rod that removes a typewriter cover…and so on. We rely on creating automated processes to keep on top of tasks. DBAs have a lot of tasks to perform: backups, performance tuning, data movement, system monitoring, and of course, avoiding being noticed. Every day, there are many steps to perform to maintain the database infrastructure, including: checking physical structures, re-indexing tables where needed, backing up the databases, checking those backups, running the ETL, and preparing the daily reports. And yes, all of these processes have to complete before you can call it a day, and probably before many others have started that same day. Some of these tasks are just naturally complicated on their own. Other tasks become complicated because the database architecture is excessively rigid, and we often discover during “production testing” that certain processes need to be changed because the written requirements barely resembled the actual customer requirements. Then, with no time to change that rigid structure, we are forced to heap layer upon layer of code onto the problematic processes. Instead of a slight table change and a new index, we end up with 4 new ETL processes, 20 temp tables, 30 extra queries, and 1000 lines of SQL code. Report writers then need to build reports and make magical numbers appear from those toxic data structures that are overly complex and probably filled with inconsistent data. What starts out as a collection of fairly simple tasks turns into a Goldbergian nightmare of daily processes that are likely to cause your dinner to be interrupted by the smartphone doing the vibration dance that signifies trouble at the mill. So what to do? Well, if it is at all possible, simplify the problem by either going into the code and refactoring the complex code into something simple, or taking all of the processes and simplifying them into small, independent, easily-tested steps. The former approach usually requires an agreement on changing underlying structures that requires countless mind-numbing meetings, while the latter can generally be done to any complex process without the same frustration or anger. Though it will still leave you with lots of steps to complete, the ability to test each step independently will definitely increase the quality of the overall process (and with each step reporting status back, finding an actual problem within the process will definitely be less unpleasant). We all know the principle behind simplifying a sequence of processes because we learned it in math classes from elementary school onward. In my 4 years (ok, 9 years) of undergraduate work, I remember pretty much one thing from my many math classes that I apply daily to my career as a data architect, data programmer, and as an occasional indentured DBA: “show your work”. This process of showing your work was my first lesson in simplification. Each step in the process was, in fact, far simpler than the entire process.
    When you were working an equation that took both sides of 4 sheets of paper, showing your work was important because the teacher could see every step, judge it, and mark it accordingly. So often I would make an error in the first few lines of a problem, which meant that the rest of the work was actually moving me closer to a very wrong answer, no matter how correct the math was in the subsequent steps. Yet, when I got my grade back, I would sometimes be pleasantly surprised. I passed, yet missed every problem on the test. But why? While I got simple facts like 1+1=2 wrong in every problem, the teacher could see that I was using the right process. A computer process is very similar. We take complex processes, show our work by storing intermediate values, and test each step independently. When a process has 100 steps, each step becomes a simple step that is tested and verified, such that there will be 100 places where data is stored, validated, and can be checked off as complete. If you get step 1 of 100 wrong, you can fix it and be confident (provided you tested the other steps better than the one you had to repair) that the rest of the process works. If you have 100 steps and store the state of the process exactly once, the resulting testable chunk of code will be far more complex, and finding the error will require checking all 100 steps as one, rather like trying to find a specific needle in a stack of similarly shaped needles. The goal is to strive for simplicity either in the solution, or at least by simplifying every process down to as many independent, testable, simple tasks as possible. For the tasks that really can’t be done completely independently, at a minimum take those tasks and break them down into simpler steps that can be tested independently. Like working out division problems longhand, have each step of the larger problem verified and tested.
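    To make the "show your work" idea concrete, here is a minimal sketch of the step-logging pattern in T-SQL; the table, column and step names are invented for illustration, not taken from any particular system:

        -- Hypothetical process log: each step writes a row, so a failure is
        -- immediately attributable to one small, independently testable step.
        CREATE TABLE ProcessStepLog
        (
            RunId      int          NOT NULL,
            StepNumber int          NOT NULL,
            StepName   varchar(100) NOT NULL,
            RowsMoved  int          NULL,
            Status     varchar(10)  NOT NULL,  -- 'Success' or 'Failed'
            LoggedAt   datetime     NOT NULL DEFAULT (GETDATE()),
            CONSTRAINT PK_ProcessStepLog PRIMARY KEY (RunId, StepNumber)
        );

        DECLARE @RunId int = 1;   -- one id per run of the whole process
        DECLARE @Rows int;

        -- ... step 12 of the process does its real work here ...
        SELECT @Rows = 42;        -- stand-in for the step's actual row count

        -- Record the outcome before moving on to step 13
        INSERT INTO ProcessStepLog (RunId, StepNumber, StepName, RowsMoved, Status)
        VALUES (@RunId, 12, 'Load staging customers', @Rows, 'Success');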

    Read the article

  • User-Defined Customer Events & their impact (FA Type Profile)

    - by Rajesh Sharma
    CC&B automatically creates field activities when a specific Customer Event takes place. This depends on the way you have set up your Field Activity Type Profiles, the templates within, and associated SP Condition(s) on the template. CC&B uses the service point type, its state and the referenced customer event to determine which field activity type to generate.   Customer events available in the base product include: Cut for Non-payment (CNP) Disconnect Warning (DIWA) Reconnect for Payment (REPY) Reread (RERD) Stop Service (STOP) Start Service (STRT) Start/Stop (STSP)   Note the Field values/codes defined for each event.   CC&B comes with the flexibility to define a new set of customer events. These can be defined in the Look Up - CUST_EVT_FLG. Values from the Look Up are used on the Field Activity Type Profile Template page.     So what's the use of having user-defined Customer Events? And how will the system detect such events in order to create field activity(s)?   Well, the system can only detect such events when you reference a user-defined customer event on a Severance Event Type for an event type Create Field Activities.     This way you can create additional field activities of a specific field activity type for user-defined customer events.   One of our customers adopted this feature and created a user-defined customer event CNPW - Cut for Non-payment for Water Services. This event was then linked on a Field Activity Type Profile and referenced on a Severance Event - CUT FOR NON PAY-W. The associated Severance Process was configured to trigger a reconnection process if it was cancelled (done by defining a Post Cancel Algorithm). Whenever this Severance Event was executed, a specific type of Field Activity was generated for disconnection purposes. The Field Activity type was determined by the system from the Field Activity Type Profile referenced for the SP Type, the SP's state and the referenced user-defined customer event. All was working well until the time when they realized that in spite of the Severance Process getting cancelled (when a payment was made), the Post Cancel Algorithm was not executed to start a Reconnection Severance Process for the purpose of generating a reconnection field activity and reconnecting the service.   Basically, the Post Cancel algorithm (if specified on a Severance Process Template) is triggered when a Severance Process gets cancelled because a credit transaction has affected/relieved a Service Agreement's debt.   So what exactly was happening? Now we come to the actual question: what is the impact of having a user-defined customer event?   System-defined/base customer events are hard-coded across the entire system. There is an impact even if you remove any customer event entry from the Look Up. User-defined customer events are not recognized by the system anywhere else except in the severance process, as described above.   There are a few programs which have routines to first validate the completion of disconnection field activities which were raised as a result of customer event CNP - Cut for Non-payment, in order to perform other associated actions. One such program is the Post Cancel Algorithm, referenced on a Severance Process Template, generally used to reconnect services which were disconnected from another Severance Event, specifically CNP - Cut for Non-Payment. 
    The post cancel algorithm provided by the product - SEV POST CAN - does the following (below is the algorithm's description):   This algorithm is called after a severance process has been cancelled (typically because the debt was paid and the SA is no longer eligible to be on the severance process). It checks to see if the process has a completed 'disconnect' event and, if so, starts a reconnect process using the Reconnect Severance Process Template defined in the parameter.    Note the phrase "completed 'disconnect' event": this algorithm implicitly checks for Field Activities in completed status which were generated from Severance Events as a result of the CNP - Cut for Non-payment customer event.   Now if we look back at the customer's issue, we can see that the Post Cancel algorithm was triggered, but was not able to find any 'Completed' CNP - Cut for Non-payment related field activity, and hence was not able to start a reconnection severance process. This was because a field activity had been generated and completed for the customer event CNPW - Cut for Non-payment of Water Services instead.   To conclude, if you introduce new customer events that extend or simulate base customer events, the ones that are included in the base product, ensure that there is no other impact, either direct or indirect, on other business functions that the application has to offer.  

    Read the article

  • SpringBatch Jaxb2Marshaller: different name of class and xml attribute

    - by user588961
    I am trying to read an XML file as input for Spring Batch: Java Class: package de.example.schema.processes.standardprocess; @XmlAccessorType(XmlAccessType.FIELD) @XmlType(name = "Process", namespace = "http://schema.example.de/processes/process", propOrder = { "input" }) public class Process implements Serializable { @XmlElement(namespace = "http://schema.example.de/processes/process") protected ProcessInput input; public ProcessInput getInput() { return input; } public void setInput(ProcessInput value) { this.input = value; } } SpringBatch dev-job.xml: <bean id="exampleReader" class="org.springframework.batch.item.xml.StaxEventItemReader" scope="step"> <property name="fragmentRootElementName" value="input" /> <property name="resource" value="file:#{jobParameters['dateiname']}" /> <property name="unmarshaller" ref="jaxb2Marshaller" /> </bean> <bean id="jaxb2Marshaller" class="org.springframework.oxm.jaxb.Jaxb2Marshaller"> <property name="classesToBeBound"> <list> <value>de.example.schema.processes.standardprocess.Process</value> <value>de.example.schema.processes.standardprocess.ProcessInput</value> ... </list> </property> </bean> Input file: <?xml version="1.0" encoding="UTF-8"?> <process:process xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:process="http://schema.example.de/processes/process"> <process:input> ... </process:input> </process:process> It fires the following exception: [javax.xml.bind.UnmarshalException: unexpected element (uri:"http://schema.example.de/processes/process", local:"input"). Expected elements are <<{http://schema.example.de/processes/process}processInput] at org.springframework.oxm.jaxb.JaxbUtils.convertJaxbException(JaxbUtils.java:92) at org.springframework.oxm.jaxb.AbstractJaxbMarshaller.convertJaxbException(AbstractJaxbMarshaller.java:143) at org.springframework.oxm.jaxb.Jaxb2Marshaller.unmarshal(Jaxb2Marshaller.java:428) If I change <process:input> to <process:processInput> in the xml it works fine. Unfortunately I can change neither the xml nor the java class. Is there a possibility to make Jaxb2Marshaller map the element 'input' to the class 'ProcessInput'?
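    For context on what JAXB is complaining about: StaxEventItemReader hands the unmarshaller a fragment rooted at "input", but the bound classes only know ProcessInput under the name "processInput". The question rules out editing the classes, so the following is a sketch only, to pin down the missing mapping; if the class were editable, a root-element annotation naming the fragment element would resolve the UnmarshalException directly:

        import java.io.Serializable;
        import javax.xml.bind.annotation.XmlAccessType;
        import javax.xml.bind.annotation.XmlAccessorType;
        import javax.xml.bind.annotation.XmlRootElement;

        // Declares ProcessInput as a root element whose XML name is "input",
        // matching the fragmentRootElementName configured on the reader.
        @XmlRootElement(name = "input",
                        namespace = "http://schema.example.de/processes/process")
        @XmlAccessorType(XmlAccessType.FIELD)
        public class ProcessInput implements Serializable {
            // ... generated fields unchanged ...
        }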

    Read the article

  • Is there an IDE that can simplify the process of creating a game matchmaking website?

    - by Scott
    Yes, I'm an old guy. And I'm well versed in "C" and have written several games which I have been selling on the web for a number of years. And now, I would like to adapt one of my games to be "online". Sounds simple. I'm sure I can use the thousands of lines of "C" code that I've already written. Right? So my initial investigation begins. First, I think I'll need a server program that lives on a dedicated server (or a VPS probably) that talks to a bunch of client applications that live on individual devices around the world. I can certainly handle that! (I think to myself). I'll break up my existing game into two pieces: a client piece that is just the game displays and buttons, and a server piece that does everything else. Piece of cake, right? But that means that the "server piece" must be executed on a remote machine somewhere and run 24/7. Can I do that? [apparently, that question is so basic, so uneducated, and so lame, that nobody has ever posed it before. Because hours of Googling does not yield an answer. Fine. I'll assume I can do that and move on.] I'll need a "game room", which to me means a website where you log in and then go to a lobby of some kind where you can set up your preferences, see if any of your friends are connected, and create or join games. Should be easy, but it's not. No way. Can I do all this with my local website builder? (which happens to be 90 Second Website Builder, a nice product, btw). It turns out I cannot. I can start with that, but must modify each page so I can interact with my sql database. So I begin making each page a "PHP" page and dynamically modifying the HTML code with PHP code. I'm already starting to get a headache. Because the resulting web pages looked terrible, I began looking at using jQuery. I want to use a jQuery dialog on my website to display a list of friends and allow the user to select one to invite to the game. [google search for "how to populate a jQuery dialog from a sql database" yields nothing but more confusion.] Javascript? Java? HTML? XML? HTML5? PHP? jQuery? Flash? Sockets? Forms? CSS? Learning about each one of these, and how they interact with each other and/or depend on each other, is too much for my feeble old brain. Can anyone simplify this process for me? Is there an IDE that will help me do all this without having to go back to college for a few years? Thanks, Scott
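    On the specific sub-question buried in there ("populate a jQuery dialog from a sql database"), the usual shape is a tiny PHP endpoint returning JSON plus an ajax call; the following is a sketch with invented file, table and column names:

        <?php
        // friends.php -- hypothetical endpoint: returns the logged-in
        // user's friends as a JSON array of names.
        session_start();
        $db = new PDO('mysql:host=localhost;dbname=game', 'user', 'pass');
        $stmt = $db->prepare('SELECT name FROM friends WHERE user_id = ?');
        $stmt->execute(array($_SESSION['user_id']));
        header('Content-Type: application/json');
        echo json_encode($stmt->fetchAll(PDO::FETCH_COLUMN));

    And in the page, the dialog is filled from that endpoint rather than built server-side:

        // Fetch the list and show it in a jQuery UI dialog.
        $.getJSON('friends.php', function (names) {
            var list = $('<ul>');
            $.each(names, function (i, n) { list.append($('<li>').text(n)); });
            $('#friends').empty().append(list).dialog({ title: 'Invite a friend' });
        });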

    Read the article

  • On a BPM Mission with Process Accelerators. Part 1: BPM as an ATV

    - by Cesare Rotundo
    Part 1: BPM as an ATV. It’s always exciting to talk to customers that are in the middle of a BPM transformational journey. Their thirst for new processes to improve with BPM makes them explorers in a landscape of opportunities. They have discovered that with BPM they can “go places” they couldn’t reach before. In a way, learning how to generate value with BPM is like adopting a new means of transportation. Apps are like regular cars: very efficient, but to be used on paved roads: the road/process has been traced, and there are fixed paths to follow to get from “opportunity to quote” or from “quote to cash”. Getting off the road is risky, and laying down new asphalt is slow and expensive. Custom development is like running: you can go virtually anywhere, following any path you like, yet it’s slow, and a lot of sweat. BPM allows you to go “off the beaten path” laid out by packaged apps, yet make fast progress compared to custom development. BPM is therefore more like an All-Terrain Vehicle (ATV): less efficient than a car, but much faster than running, with an engine powerful enough to get you places. The similarities between BPM and ATVs don’t stop here: you must learn to ride it even if you already know how to drive a car; you can reach places, but figuring out the path to your destination is harder. Ultimately, with BPM as with an ATV, you reach places that you thought you could never reach, and you discover new destinations that provide great benefit to you … and that you didn’t even know existed! That’s where the sense of accomplishment that we heard from our BPM customers comes from, as well as the desire to share their experience, or even, as in the case of a County, the willingness to contribute their BPM solutions to help other agencies that face the same challenges. The question we wanted to answer is how we can teach organizations to drive the BPM ATV, thus leading them to deeper success with BPM, while increasing their awareness of the potential for reaching new targets, and finally equipping them with the right tools. Like with ATVs, getting from point A to point B is more of a work of art than cruising on the highway by car. There is a lot we can do: after all, many sought-after destinations are common; someone else has been on the same path before. If only you could learn from their experience …

    Read the article

  • How Database Replay Capture works

    - by Liu Maclean
    Database Replay arrived in 11g; its workload capture records the real workload running against a database so that it can be replayed later. A few notes on Workload Capture before we start: as a rough sizing guide, capturing 10 minutes of a busy OLTP workload can generate around 1GB of capture files. Points worth knowing: if the database is started with STARTUP RESTRICT, the restricted session is lifted automatically once the capture begins; each capture records the SCN at which it started and can be scoped with filters; starting a capture requires the SYSDBA or SYSOPER privilege and an OS directory to write to. The overhead is modest: in TPCC tests the capture added roughly 4.5%, and each session uses a 64KB buffer for captured calls. On RAC, the workload capture directory needs to be in place and visible to every instance before start_capture is issued. Because each session buffers up to 64KB, the workload capture files are written by each Server Process itself during parse and execution: the Server Process records its LOGON, LOGOFF and SQL calls in PGA heaps named 'WCR Capture PG' and 'WCR Capture PGA', and flushes them to the WCR capture file, where the write can surface as the wait event 'WCR: capture file IO write'. The WCR-related wait events can be listed like this: SQL> select * from v$version; BANNER -------------------------------------------------------------------------------- Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production PL/SQL Release 11.2.0.3.0 - Production CORE 11.2.0.3.0 Production TNS for Linux: Version 11.2.0.3.0 - Production NLSRTL Version 11.2.0.3.0 - Production SQL> select name from v$event_name where name like '%WCR%'; NAME ---------------------------------------------------------------- WCR: replay client notify WCR: replay clock WCR: replay lock order WCR: replay paused WCR: RAC message context busy WCR: capture file IO write WCR: Sync context busy latch: WCR: sync latch: WCR: processes HT 11g also introduces a set of WCR-related latches: 1* select name,gets from v$latch where name like '%WCR%' SQL> / NAME GETS ------------------------------ ---------- WCR: kecu cas mem 3 WCR: kecr File Count 37 WCR: MMON Create dir 1 WCR: ticker cache 0 WCR: sync 495 WCR: processes HT 0 WCR: MTS VC queue 0 7 rows selected. Now let's walk through what Database Replay Capture actually does. 1. Start a capture with dbms_workload_capture.start_capture: CREATE OR REPLACE DIRECTORY dbcapture AS '/home/oracle/dbcapture'; execute dbms_workload_capture.start_capture('CAPTURE','DBCAPTURE',default_action=>'INCLUDE'); SQL> select id,name,status,start_time,end_time,connects,user_calls,dir_path from dba_workload_captures where id = (select max(id) from dba_workload_captures) ; ID ---------- NAME -------------------------------------------------------------------------------- STATUS START_TIM END_TIME CONNECTS ---------------------------------------- --------- --------- ---------- USER_CALLS ---------- DIR_PATH -------------------------------------------------------------------------------- 1 CAPTURE IN PROGRESS 08-DEC-12 11 ID ---------- NAME -------------------------------------------------------------------------------- STATUS START_TIM END_TIME CONNECTS ---------------------------------------- --------- --------- ---------- USER_CALLS ---------- DIR_PATH -------------------------------------------------------------------------------- 167 /home/oracle/dbcapture 2. Examine the capture files:
    2. Examine the capture files on disk:

    [oracle@mlab2 dbcapture]$ ls -lR
    .:
    total 8
    drwxr-xr-x 2 oracle oinstall 4096 Dec 8 07:24 cap
    drwxr-xr-x 3 oracle oinstall 4096 Dec 8 07:24 capfiles
    -rw-r--r-- 1 oracle oinstall    0 Dec 8 07:24 wcr_cap_00001.start

    ./cap:
    total 4
    -rw-r--r-- 1 oracle oinstall 91 Dec 8 07:24 wcr_scapture.wmd

    ./capfiles:
    total 4
    drwxr-xr-x 12 oracle oinstall 4096 Dec 8 07:24 inst1

    ./capfiles/inst1:
    total 40
    drwxr-xr-x 2 oracle oinstall 4096 Dec 8 08:31 aa
    drwxr-xr-x 2 oracle oinstall 4096 Dec 8 07:24 ab
    drwxr-xr-x 2 oracle oinstall 4096 Dec 8 07:24 ac
    drwxr-xr-x 2 oracle oinstall 4096 Dec 8 07:24 ad
    drwxr-xr-x 2 oracle oinstall 4096 Dec 8 07:24 ae
    drwxr-xr-x 2 oracle oinstall 4096 Dec 8 07:24 af
    drwxr-xr-x 2 oracle oinstall 4096 Dec 8 07:24 ag
    drwxr-xr-x 2 oracle oinstall 4096 Dec 8 07:24 ah
    drwxr-xr-x 2 oracle oinstall 4096 Dec 8 07:24 ai
    drwxr-xr-x 2 oracle oinstall 4096 Dec 8 07:24 aj

    ./capfiles/inst1/aa:
    total 316
    -rw-r--r-- 1 oracle oinstall   1762 Dec 8 07:25 wcr_c6cdah0000001.rec
    -rw-r--r-- 1 oracle oinstall  16478 Dec 8 07:28 wcr_c6cf1h0000002.rec
    -rw-r--r-- 1 oracle oinstall   1772 Dec 8 07:29 wcr_c6cjdh0000004.rec
    -rw-r--r-- 1 oracle oinstall   1535 Dec 8 07:29 wcr_c6cnah0000005.rec
    -rw-r--r-- 1 oracle oinstall   1821 Dec 8 07:41 wcr_c6cpfh0000007.rec
    -rw-r--r-- 1 oracle oinstall   1815 Dec 8 07:33 wcr_c6cq6h000000a.rec
    -rw-r--r-- 1 oracle oinstall   1535 Dec 8 07:34 wcr_c6cxmh000000h.rec
    -rw-r--r-- 1 oracle oinstall   1427 Dec 8 07:41 wcr_c6cxvh000000j.rec
    -rw-r--r-- 1 oracle oinstall   1425 Dec 8 07:41 wcr_c6czph000000k.rec
    -rw-r--r-- 1 oracle oinstall   2398 Dec 8 07:49 wcr_c6dqfh000000q.rec
    -rw-r--r-- 1 oracle oinstall 259321 Dec 8 08:35 wcr_c6du7h000000r.rec
    -rw-r--r-- 1 oracle oinstall      0 Dec 8 07:55 wcr_c6f6yh000000t.rec
    -rw-r--r-- 1 oracle oinstall      0 Dec 8 08:28 wcr_c6h3qh0000013.rec

    (The subdirectories ab through aj are all empty: total 0.)

    [oracle@mlab2 dbcapture]$ cd ./capfiles/inst1/aa
    [oracle@mlab2 aa]$ ls -l | wc -l
    14

    wc -l reports 14 lines: the "total" header plus 13 .rec record files so far.

    3. Log on to create a new Server Process:

    [oracle@mlab2 ~]$ sqlplus / as sysdba

    SQL*Plus: Release 11.2.0.3.0 Production on Sat Dec 8 08:37:40 2012
    Copyright (c) 1982, 2011, Oracle. All rights reserved.

    Connected to:
    Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
    With the Partitioning, Automatic Storage Management, OLAP, Data Mining
    and Real Application Testing options

    As soon as the session logs on, a new WCR record file appears:
    [oracle@mlab2 aa]$ ls -ltr
    total 316
    -rw-r--r-- 1 oracle oinstall   1762 Dec 8 07:25 wcr_c6cdah0000001.rec
    -rw-r--r-- 1 oracle oinstall  16478 Dec 8 07:28 wcr_c6cf1h0000002.rec
    -rw-r--r-- 1 oracle oinstall   1772 Dec 8 07:29 wcr_c6cjdh0000004.rec
    -rw-r--r-- 1 oracle oinstall   1535 Dec 8 07:29 wcr_c6cnah0000005.rec
    -rw-r--r-- 1 oracle oinstall   1815 Dec 8 07:33 wcr_c6cq6h000000a.rec
    -rw-r--r-- 1 oracle oinstall   1535 Dec 8 07:34 wcr_c6cxmh000000h.rec
    -rw-r--r-- 1 oracle oinstall   1425 Dec 8 07:41 wcr_c6czph000000k.rec
    -rw-r--r-- 1 oracle oinstall   1427 Dec 8 07:41 wcr_c6cxvh000000j.rec
    -rw-r--r-- 1 oracle oinstall   1821 Dec 8 07:41 wcr_c6cpfh0000007.rec
    -rw-r--r-- 1 oracle oinstall   2398 Dec 8 07:49 wcr_c6dqfh000000q.rec
    -rw-r--r-- 1 oracle oinstall      0 Dec 8 07:55 wcr_c6f6yh000000t.rec
    -rw-r--r-- 1 oracle oinstall      0 Dec 8 08:28 wcr_c6h3qh0000013.rec
    -rw-r--r-- 1 oracle oinstall 259321 Dec 8 08:35 wcr_c6du7h000000r.rec
    -rw-r--r-- 1 oracle oinstall      0 Dec 8 08:37 wcr_c6hp4h0000018.rec

    The newcomer is wcr_c6hp4h0000018.rec. Find the SPID of the new server process:

    SQL> select spid from v$process
      2  where addr = (select paddr from v$session
      3                where sid = (select distinct sid from v$mystat));

    SPID
    ------------------------
    14293

    Process 14293 holds the new capture file open, as its file descriptors show:

    [oracle@mlab2 ~]$ ls -l /proc/14293/fd
    total 0
    lr-x------ 1 oracle oinstall 64 Dec 8 08:39 0 -> /dev/null
    l-wx------ 1 oracle oinstall 64 Dec 8 08:39 1 -> /dev/null
    lrwx------ 1 oracle oinstall 64 Dec 8 08:39 10 -> /u01/app/oracle/product/11201/db_1/rdbms/audit/CRMV_ora_14293_1.aud
    l-wx------ 1 oracle oinstall 64 Dec 8 08:39 11 -> /u01/app/oracle/diag/rdbms/crmv/CRMV/trace/CRMV_ora_14293.trc
    l-wx------ 1 oracle oinstall 64 Dec 8 08:39 12 -> pipe:[34585895]
    l-wx------ 1 oracle oinstall 64 Dec 8 08:39 13 -> /u01/app/oracle/diag/rdbms/crmv/CRMV/trace/CRMV_ora_14293.trm
    l-wx------ 1 oracle oinstall 64 Dec 8 08:39 2 -> /dev/null
    lr-x------ 1 oracle oinstall 64 Dec 8 08:39 3 -> /dev/null
    lr-x------ 1 oracle oinstall 64 Dec 8 08:39 4 -> /dev/null
    lr-x------ 1 oracle oinstall 64 Dec 8 08:39 5 -> /u01/app/oracle/product/11201/db_1/rdbms/mesg/oraus.msb
    lr-x------ 1 oracle oinstall 64 Dec 8 08:39 6 -> /proc/14293/fd
    lr-x------ 1 oracle oinstall 64 Dec 8 08:39 7 -> /dev/zero
    lrwx------ 1 oracle oinstall 64 Dec 8 08:39 8 -> /home/oracle/dbcapture/capfiles/inst1/aa/wcr_c6hp4h0000018.rec
    lr-x------ 1 oracle oinstall 64 Dec 8 08:39 9 -> pipe:[34585894]

    lsof confirms it:

    [root@mlab2 ~]# lsof|grep wcr_c6hp4h0000018.rec
    oracle 14293 oracle 8u REG 8,1 0 17629644 /home/oracle/dbcapture/capfiles/inst1/aa/wcr_c6hp4h0000018.rec

    So each Server Process opens its own WCR .rec file at LOGON time.

    Now run some SQL in the new session:

    SQL> select 1 from dual;
             1
    ----------
             1
    SQL> /
             1
    ----------
             1

    Running strings against the file at this point shows that the SQL has not been written out yet; the records are still sitting in the session's PGA buffer. After more activity the buffer is flushed, and strings reveals the file header, the NLS settings, the LOGON information and the SQL text:

    [oracle@mlab2 aa]$ strings wcr_c6hp4h0000018.rec
    11.2.0.3.0
    *File header info. (Shadow process='14293')
    D0576B5D710A34F4E043B201A8C0ECFE
    SYS;
    NLS_LANGUAGE? AMERICAN>
    NLS_TERRITORY? AMERICA>
    NLS_CURRENCY?
    NLS_ISO_CURRENCY? AMERICA>
    NLS_NUMERIC_CHARACTERS?
    NLS_CALENDAR? GREGORIAN>
    NLS_DATE_FORMAT? DD-MON-RR>
    NLS_DATE_LANGUAGE? AMERICAN>
    NLS_CHARACTERSET? AL32UTF8>
    NLS_SORT? BINARY>
    NLS_TIME_FORMAT? HH.MI.SSXFF AM>
    NLS_TIMESTAMP_FORMAT? DD-MON-RR HH.MI.SSXFF AM>
    NLS_TIME_TZ_FORMAT? HH.MI.SSXFF AM TZR>
    NLS_TIMESTAMP_TZ_FORMAT? DD-MON-RR HH.MI.SSXFF AM TZR>
    NLS_DUAL_CURRENCY?
    NLS_SPECIAL_CHARS?
    NLS_NCHAR_CHARACTERSET? UTF8>
    NLS_COMP? BINARY>
    NLS_LENGTH_SEMANTICS? BYTE>
    NLS_NCHAR_CONV_EXCP?
    FALSE
    (DESCRIPTION=(ADDRESS=(PROTOCOL=beq)(PROGRAM=/u01/app/oracle/product/11201/db_1/bin/oracle)(ARGV0=oracleCRMV)(ARGS='(DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=beq)))')(DETACH=NO))(CONNECT_DATA=(CID=(PROGRAM=sqlplus)(HOST=mlab2.oracle.com)(USER=oracle))))
    ,[email protected] (TNS V1-V3)U
    tselect spid from v$process where addr = ( select paddr from v$session where sid=(select distinct sid from v$mystat))
    ` _
    select 1 from dual
    select 1 from dual

    Generate some more workload (a couple of DDL statements and a PL/SQL loop), and the corresponding records appear in the file as well:

    [oracle@mlab2 aa]$ strings wcr_c6hp4h0000018.rec
    9`9_^B
    create table vva(t1 int)
    `:_i :`:_iB `;_^ ;`;_^B
    create table vva(t1 int)
    `_i >`>_iB FusC `?_^ ?`?_^B FvWC
    _begin for i in 1..50000 loop execute immediate 'select 1 from dual where 2='||i; end loop; end;

    On Server Process LOGOFF the remaining buffered records are flushed too.

    So the Server Process emits WCR records while it parses and executes, keeps them in its PGA, and writes them out in batches rather than immediately, which is why the capture overhead stays low.

    4. Dump the process state:

    SQL> oradebug setmypid
    Statement processed.
    SQL> oradebug dump processstate 10;
    Statement processed.
    SQL> oradebug tracefile_name
    /u01/app/oracle/diag/rdbms/crmv/CRMV/trace/CRMV_ora_14293.trc

    The process state dump shows the session waiting repeatedly on 'WCR: capture file IO write', i.e. the Server Process flushing its buffer to its WCR file:

    3: waited for 'SQL*Net message to client'
       driver id=0x62657100, #bytes=0x1, =0x0
       wait_id=139 seq_num=140 snap_id=1
       wait times: snap=0.000007 sec, exc=0.000007 sec, total=0.000007 sec
       wait times: max=infinite
       wait counts: calls=0 os=0
       occurred after 0.934091 sec of elapsed time
    4: waited for 'latch: shared pool'
       address=0x60106b20, number=0x133, tries=0x0
       wait_id=138 seq_num=139 snap_id=1
       wait times: snap=0.000066 sec, exc=0.000066 sec, total=0.000066 sec
       wait times: max=infinite
       wait counts: calls=0 os=0
       occurred after 1.180690 sec of elapsed time
    5: waited for 'WCR: capture file IO write'
       =0x0, =0x0, =0x0
       wait_id=137 seq_num=138 snap_id=1
       wait times: snap=0.000189 sec, exc=0.000189 sec, total=0.000189 sec
       wait times: max=infinite
       wait counts: calls=0 os=0
       occurred after 3.122783 sec of elapsed time
    6: waited for 'WCR: capture file IO write'
       =0x0, =0x0, =0x0
       wait_id=136 seq_num=137 snap_id=1
       wait times: snap=0.000191 sec, exc=0.000191 sec, total=0.000191 sec
       wait times: max=infinite
       wait counts: calls=0 os=0
       occurred after 3.053132 sec of elapsed time
    7: waited for 'WCR: capture file IO write'

    5. Dump the PGA and look at the capture buffers:

    SQL> oradebug dump heapdump 536870917;
    Statement processed.

    $ grep WCR /u01/app/oracle/diag/rdbms/crmv/CRMV/trace/CRMV_ora_14293.trc
    Chunk 7fb1b606bfc0 sz= 65600 freeable "WCR Capture PG " ds=0x7fb1b6115f90
    Chunk 7fb1b6111e18 sz= 4224 freeable "WCR Capture PG " ds=0x7fb1b6115f90
    Chunk 7fb1b6112e98 sz= 4184 freeable "WCR Capture PG " ds=0x7fb1b6115f90
    Chunk 7fb1b6113ef0 sz= 4224 freeable "WCR Capture PG " ds=0x7fb1b6115f90
    Chunk 7fb1b6114f70 sz= 4104 recreate "WCR Capture PG " latch=(nil)
    Chunk 7fb1b6115f78 sz= 160 freeable "WCR Capture PGA"
    Chunk 7fb1b6116018 sz= 3248 freeable "WCR Capture PGA"
    Subheap ds=0x7fb1b6115f90 heap name= WCR Capture PG size= 82336
    HEAP DUMP heap name="WCR Capture PG" desc=0x7fb1b6115f90
    FIVE LARGEST SUB HEAPS for heap name="WCR Capture PG" desc=0x7fb1b6115f9

    The PGA contains freeable/recreate chunks in the "WCR Capture PG" and "WCR Capture PGA" subheaps, allocated by the Server Process itself. The sz= 65600 chunk is the 64KB capture buffer mentioned earlier: each captured session pays roughly 64KB of PGA for its WCR buffer.
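    To gauge how much time captured sessions spend in these flushes, the wait event identified above can be tracked per session. A simple sketch against v$session_event (a generic monitoring query, not from the trace analysis above):

    SELECT se.sid, se.total_waits, se.time_waited_micro
    FROM   v$session_event se
    WHERE  se.event = 'WCR: capture file IO write'
    ORDER  BY se.time_waited_micro DESC;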
    6. Related hidden parameters. Two underscore parameters control WCR capture:

    SQL> SELECT x.ksppinm NAME, y.ksppstvl VALUE, x.ksppdesc DESCRIB
      2  FROM SYS.x$ksppi x, SYS.x$ksppcv y
      3  WHERE x.inst_id = USERENV ('Instance')
      4  AND y.inst_id = USERENV ('Instance')
      5  AND x.indx = y.indx
      6  AND x.ksppinm in ('_capture_buffer_size','_wcr_control');

    NAME                 VALUE                DESCRIB
    -------------------- -------------------- ------------------------------------------------------------
    _wcr_control         0                    Oracle internal test WCR parameter used ONLY for testing!
    _capture_buffer_size 65536                To set the size of the PGA I/O recording buffers

    _capture_buffer_size sets the size of the per-session PGA WCR buffer, 64KB by default; _wcr_control is an internal parameter used only for Oracle testing.

    To summarize:

    1. WCR workload capture is decentralized: there is no dedicated capture background process, and each Server Process records its own workload.
    2. Each Server Process writes its own WCR capture file.
    3. A Server Process records its LOGON, LOGOFF and SQL activity into WCR records.
    4. Records are not written out immediately; they are buffered in the PGA "WCR Capture" subheaps and flushed when a buffer fills up (or on timeout/logoff).
    5. Because writes are deferred and capture piggybacks on work the process does anyway during parse and execution, the overhead stays low: about 4.5% in TPC-C tests.
    6. _capture_buffer_size controls the PGA WCR buffer size; the default is 64KB.
    7. The WCR capture file is a binary format, but readable content such as SQL text can be pulled out of it with strings.
    8. The wait event 'WCR: capture file IO write' indicates a Server Process writing to its WCR capture file.
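    As a closing sketch: once the capture has been finished, the AWR data covering the capture window can be exported alongside the capture files, so the replay side gets the performance baseline too. GET_CAPTURE_INFO and EXPORT_AWR are standard DBMS_WORKLOAD_CAPTURE calls; the DBCAPTURE directory follows the earlier example.

    DECLARE
      l_cap_id NUMBER;
    BEGIN
      -- look up the id of the capture stored in the DBCAPTURE directory
      l_cap_id := DBMS_WORKLOAD_CAPTURE.GET_CAPTURE_INFO(dir => 'DBCAPTURE');
      -- export the AWR snapshots spanning the capture window
      DBMS_WORKLOAD_CAPTURE.EXPORT_AWR(capture_id => l_cap_id);
    END;
    /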

    Read the article

  • How to deal with Denial of Service attacks, Session fixation and Cross Site request forgery in Rails

    - by Gautam
    Hi, I have just started learning Ruby on Rails. I went looking for ways to prevent DoS attacks in Rails and ended up reading about DoS, session fixation and cross-site request forgery in Rails. How do you prevent all three of these attacks? Could you suggest a good tutorial on how to deal with such attacks in RoR? Looking forward to your help. Thanks in advance. Regards, Gautam

    Read the article

  • Prevent Cross-site request forgery - Never Rely on The SessionID Sent to Your Server in The Cookie Header

    - by Yan Cheng CHEOK
    I am reading the tutorial at http://code.google.com/p/google-web-toolkit-incubator/wiki/LoginSecurityFAQ It states: Remember - you must never rely on the sessionID sent to your server in the cookie header; look only at the sessionID that your GWT app sends explicitly in the payload of messages to your server. Is this meant to prevent http://en.wikipedia.org/wiki/Cross-site_request_forgery#Example_and_characteristics ? With this methodology, is it sufficient to prevent the above attack?

    Read the article

< Previous Page | 129 130 131 132 133 134 135 136 137 138 139 140  | Next Page >