Search Results

Search found 7081 results on 284 pages for 'idle processing'.

Page 17/284

  • Detect number of IDLE processors ruby

    - by Yannick Wurm
    Hello, I work on shared Linux machines with between 4 and 24 cores. To make the best use of them, I use the following code to detect the number of processors from my Ruby scripts: return `cat /proc/cpuinfo | grep processor | wc -l`.to_i (perhaps there is a pure-Ruby way of doing this?). But sometimes a colleague is using six or eight of the 24 cores (as seen via top). How can I get an estimate of the number of currently unused processors that I can use without making anyone upset? Thanks!
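
    A rough way to estimate the free cores is to subtract the recent load average from the total core count. Below is a minimal sketch of that idea (in Python rather than Ruby, purely for illustration); treating the 1-minute load average as "busy cores" and reserving one core for yourself are assumptions, not a precise measurement.

      # Sketch: estimate currently idle cores as (total cores - 1-minute load average).
      # Assumes a Linux box where the load average roughly tracks busy cores.
      import os

      def estimated_idle_cores():
          total = os.cpu_count() or 1            # total logical processors
          load_1min, _, _ = os.getloadavg()      # runnable processes averaged over 1 minute
          return max(1, int(total - load_1min))  # keep at least one core for yourself

      if __name__ == "__main__":
          print(estimated_idle_cores())

    The same arithmetic can be done from Ruby by reading /proc/loadavg alongside the /proc/cpuinfo count you already have.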

    Read the article

  • Prevent Windows from locking when it detects user idle time

    - by Zimelemon
    I work in a Windows Terminal Server environment where, if you leave the computer for a while, Windows locks the session and the terminal is powered off. What I need is code that sends a message to Windows to make it believe the user is in front of the PC (mouse or keyboard activity). Thanks in advance.
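
    One common workaround is to periodically inject a harmless key press so that Windows resets its idle timer. Below is a minimal sketch (Python via ctypes, calling the Win32 keybd_event API); the choice of the rarely used F15 key and the 4-minute interval are assumptions, and whether doing this on a locked-down Terminal Server is acceptable is a policy question for your administrator.

      # Sketch: periodically send a harmless key press (F15) so Windows believes
      # the user is active. Windows-only; uses the Win32 keybd_event API via ctypes.
      import ctypes
      import time

      VK_F15 = 0x7E            # a key that virtually no application reacts to
      KEYEVENTF_KEYUP = 0x0002

      def tap_key(vk):
          ctypes.windll.user32.keybd_event(vk, 0, 0, 0)                # key down
          ctypes.windll.user32.keybd_event(vk, 0, KEYEVENTF_KEYUP, 0)  # key up

      if __name__ == "__main__":
          while True:
              tap_key(VK_F15)
              time.sleep(240)  # well under a typical 10-minute idle timeout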

    Read the article

  • How to disable the RemoteApp session lock after 10 minutes idle, so the user does not need to input a password to unlock?

    - by Carlos Sanchez
    RemoteApp sessions lock if idle for 10 minutes, and the user needs to input a password to unlock. My users are running an application from a Win2008 Terminal Server using RemoteApp. If the application remains idle for 10 minutes it gets "locked" and the user is required to enter a username and password to continue using it. This is VERY VERY annoying, as the app usually sits idle for about 20-30 minutes, is used for 1 min... repeat.

    Read the article

  • Determining idle network transfer bandwidth

    - by rwmnau
    I'm building an application that will move around some potentially large files, but I want to do it without disturbing the user's network connection by flooding it. I know that Windows BITS has this kind of functionality, and that's essentially what I'm looking to replicate (as far as the throttling goes). I know BITS has other functionality as well that I'm not interested in, and I also have the option to consume it from .NET, but I'm interested in how it works. I've looked online, and I haven't found a clear explanation of how exactly BITS determines how much bandwidth to consume, aside from a vague "BITS polls activity to watch for a drop in the bandwidth used by other programs." What does this mean? Bandwidth consumed by other programs can drop for a number of other reasons as well - can BITS tell the difference? If I was looking for a process that replicated this "stay just under the radar, where the user won't notice the transfers" functionality, how would I go about doing it?
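
    BITS does not document its exact algorithm, but the general "stay under the radar" idea can be approximated by sampling the interface counters, subtracting your own traffic, and backing off whenever the remainder (other programs' traffic) rises. A crude sketch in Python with the third-party psutil package follows; the 50 KB/s threshold, the one-second sample window and the chunk size are arbitrary assumptions.

      # Sketch: throttle our own upload when other traffic on the machine picks up.
      import time
      import psutil  # third-party: pip install psutil

      FOREIGN_LIMIT = 50 * 1024   # back off if other traffic exceeds ~50 KB/s
      CHUNK = 64 * 1024

      def send_throttled(data, send_chunk):
          """Send data in chunks, pausing when other programs use the network."""
          pos = 0
          baseline = psutil.net_io_counters().bytes_sent
          while pos < len(data):
              chunk = data[pos:pos + CHUNK]
              send_chunk(chunk)              # caller-supplied function that sends bytes
              pos += len(chunk)
              time.sleep(1.0)                # one-second sample window
              now = psutil.net_io_counters().bytes_sent
              foreign = (now - baseline) - len(chunk)   # bytes we did not send ourselves
              if foreign > FOREIGN_LIMIT:
                  time.sleep(5.0)            # someone else is busy: stay under the radar
              baseline = psutil.net_io_counters().bytes_sent

    A real implementation would also want to account for link capacity and latency rather than a fixed byte threshold.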

    Read the article

  • Why is Python 3.1.3 in the header listed as a syntax error?

    - by squashua
    Hi, I'm a newbie programmer so I'll do my best to clearly ask my question. I'm running Python scripts on Mac OS X 10.6.5 and am now trying to write and save to a text file (following instructions in the HeadsUp Python book). Whenever I hit function+F5 (as instructed) I get the same "invalid syntax" error, and IDLE highlights the "1" in "Python 3.1.3" of the header. Here's the header to which I'm referring: Python 3.1.3 (r313:86882M, Nov 30 2010, 09:55:56) [GCC 4.0.1 (Apple Inc. build 5494)] on darwin Type "copyright", "credits" or "license()" for more information. Extremely frustrating. I've checked and rechecked the code, but this doesn't seem to be code related, because the "syntax error" refers to the header text that appears in every IDLE/Python session. Help anyone? Thanks, Squash

    Read the article

  • How to fix "Sub-process /usr/bin/dpkg returned an error code (1)" when installing and upgrading packages?

    - by soum
    I am getting this error whenever trying to install or update anything: "Sub-process /usr/bin/dpkg returned an error code (1)" I need help, as I cannot install or upgrade any packages on my Ubuntu 11.10 system. Here is the rest of the error: unknown argument `triggered' dpkg: error processing mtools (--configure): subprocess installed post-installation script returned error exit status 1 Processing triggers for network-manager-pptp-gnome ... No apport report written because MaxReports is reached already postinst called with unknown argument `triggered' dpkg: error processing network-manager-pptp-gnome (--configure): subprocess installed post-installation script returned error exit status 1 No apport report written because MaxReports is reached already Processing triggers for network-manager-pptp ... postinst called with unknown argument `triggered' dpkg: error processing network-manager-pptp (--configure): subprocess installed post-installation script returned error exit status 1 No apport report written because MaxReports is reached already Processing triggers for network-manager-gnome ... /var/lib/dpkg/info/network-manager-gnome.postinst called with unknown argument `triggered' dpkg: error processing network-manager-gnome (--configure): subprocess installed post-installation script returned error exit status 1 Processing triggers for network-manager ... No apport report written because MaxReports is reached already /var/lib/dpkg/info/network-manager.postinst called with unknown argument `triggered' dpkg: error processing network-manager (--configure): subprocess installed post-installation script returned error exit status 1 No apport report written because MaxReports is reached already Processing triggers for mscompress ... postinst called with unknown argument `triggered' dpkg: error processing mscompress (--configure): subprocess installed post-installation script returned error exit status 1 No apport report written because MaxReports is reached already Errors were encountered while processing: netbase mtr-tiny module-init-tools mountmanager mono-4.0-gac mousetweaks mozilla-plugin-vlc mtools network-manager-pptp-gnome network-manager-pptp network-manager-gnome network-manager mscompress E: Sub-process /usr/bin/dpkg returned an error code (1)

    Read the article

  • Can't install anything, getting error "Sub-process /usr/bin/dpkg returned an error code (1)"

    - by soum
    I am getting this error whenever trying to install or update anything: "Sub-process /usr/bin/dpkg returned an error code (1)". Please help me; I am stuck with my Ubuntu 11.10, with no installation or update possible. The rest of the error: unknown argument `triggered' dpkg: error processing mtools (--configure): subprocess installed post-installation script returned error exit status 1 Processing triggers for network-manager-pptp-gnome ... No apport report written because MaxReports is reached already postinst called with unknown argument `triggered' dpkg: error processing network-manager-pptp-gnome (--configure): subprocess installed post-installation script returned error exit status 1 No apport report written because MaxReports is reached already Processing triggers for network-manager-pptp ... postinst called with unknown argument `triggered' dpkg: error processing network-manager-pptp (--configure): subprocess installed post-installation script returned error exit status 1 No apport report written because MaxReports is reached already Processing triggers for network-manager-gnome ... /var/lib/dpkg/info/network-manager-gnome.postinst called with unknown argument `triggered' dpkg: error processing network-manager-gnome (--configure): subprocess installed post-installation script returned error exit status 1 Processing triggers for network-manager ... No apport report written because MaxReports is reached already /var/lib/dpkg/info/network-manager.postinst called with unknown argument `triggered' dpkg: error processing network-manager (--configure): subprocess installed post-installation script returned error exit status 1 No apport report written because MaxReports is reached already Processing triggers for mscompress ... postinst called with unknown argument `triggered' dpkg: error processing mscompress (--configure): subprocess installed post-installation script returned error exit status 1 No apport report written because MaxReports is reached already Errors were encountered while processing: netbase mtr-tiny module-init-tools mountmanager mono-4.0-gac mousetweaks mozilla-plugin-vlc mtools network-manager-pptp-gnome network-manager-pptp network-manager-gnome network-manager mscompress E: Sub-process /usr/bin/dpkg returned an error code (1)

    Read the article

  • Uneditable and unreadable (for further processing) file (why?) after processing it through C++

    - by mgj
    Hi... :) This might look like a very long question, I understand, but trust me, it's not that long. I am not able to identify why, after processing, this text cannot be read or edited. I tried using the ord() function in Python to check if the text contains any Unicode (non-ASCII) characters apart from the ASCII ones, and I found quite a number of them. I have a strong feeling that this could be due to the original text itself (the INPUT). Input file: just copy-paste it into a file "acle5v1.txt". The objective of the code below is to check for upper-case characters and convert them to lower case, and also to remove all punctuation, so that these words can be taken for further processing for word alignment: #include<iostream> #include<fstream> #include<ctype.h> #include<cstring> using namespace std; ifstream fin2("acle5v1.txt"); ofstream fin3("acle5v1_op.txt"); ofstream fin4("chkcharadded.txt"); ofstream fin5("chkcharntadded.txt"); ofstream fin6("chkprintchar.txt"); ofstream fin7("chknonasci.txt"); ofstream fin8("nonprinchar.txt"); int main() { char ch,ch1; fin2.seekg(0); fin3.seekp(0); int flag = 0; while(!fin2.eof()) { ch1=ch; fin2.get(ch); if (isprint(ch))// if the character is printable flag = 1; if(flag) { fin6<<"Printable character:\t"<<ch<<"\t"<<(int)ch<<endl; flag = 0; } else { fin8<<"Non printable character caught:\t"<<ch<<"\t"<<int(ch)<<endl; } if( isalnum(ch) || ch == '@' || ch == ' ' )// checks for alpha numeric characters { fin4<<"char added: "<<ch<<"\tits ascii value: "<<int(ch)<<endl; if(isupper(ch)) { //tolower(ch); fin3<<(char)tolower(ch); } else { fin3<<ch; } } else if( ( ch=='\t' || ch=='.' || ch==',' || ch=='#' || ch=='?' || ch=='!' || ch=='"' || ch != ';' || ch != ':') && ch1 != ' ' ) { fin3<<' '; } else if( (ch=='\t' || ch=='.' || ch==',' || ch=='#' || ch=='?' || ch=='!' || ch=='"' || ch != ';' || ch != ':') && ch1 == ' ' ) { //fin3<<" '; } else if( !(int(ch)>=0 && int(ch)<=127) ) { fin5<<"Char of ascii within range not added: "<<ch<<"\tits ascii value: "<<int(ch)<<endl; } else { fin7<<"Non ascii character caught(could be a -ve value also)\t"<<ch<<int(ch)<<endl; } } return 0; } I have similar code written in Python, which gives me an output that is again not readable and not editable. The Python code looks like this: #!/usr/bin/python # -*- coding: UTF-8 -*- import sys input_file=sys.argv[1] output_file=sys.argv[2] list1=[] f=open(input_file) for line in f: line=line.strip() #line=line.rstrip('.') line=line.replace('.','') line=line.replace(',','') line=line.replace('#','') line=line.replace('?','') line=line.replace('!','') line=line.replace('"','') line=line.replace('?','') line=line.replace('|','') line = line.lower() list1.append(line) f.close() f1=open(output_file,'w') f1.write(' '.join(list1)) f1.close() The script takes the input and output file names at runtime, as: python punc_remover.py acle5v1.txt acle5v1_op.txt The output of this script is in "acle5v1_op.txt", and that output file is needed for further processing. This file "acle5v1_op.txt" is the UNREADABLE and UNEDITABLE file that I am not able to use for further processing. I need this for word alignment in NLP.
    I tried reading this output with the following program: #include<iostream> #include<fstream> using namespace std; ifstream fin1("acle5v1_op.txt"); ofstream fout1("chckread_acle5v1_op.txt"); ofstream fout2("chcknotread_acle5v1_op.txt"); int main() { char ch; int flag = 0; long int r = 0; long int nr = 0; while(!(fin1)) { fin1.get(ch); if(ch) { flag = 1; } if(flag) { fout1<<ch; flag = 0; r++; } else { fout2<<"Char not been able to be read from source file\n"; nr++; } } cout<<"Number of characters able to be read: "<<r; cout<<endl<<"Number of characters not been able to be read: "<<nr; return 0; } which prints each character if it is readable and does not print it otherwise. I observed that the output of both files is blank, so I drew the conclusion that "acle5v1_op.txt" is UNREADABLE AND UNEDITABLE. Could you please help me with how to deal with this problem? A few statistics about the original input file "acle5v1.txt": it has around 3441 lines and around 3 million characters. Keeping in mind the number of characters in the file, your editor may or may not manage to open it; I was able to open the file in gedit on Fedora 10, which I am currently using, so opening it with a particular editor was not actually an issue, at least in my case. Can I use scripting languages like Python and Perl to deal with this problem? If yes, how? Could you please be specific in that regard, as I am a novice with Perl and Python. Or could you please tell me how to solve this problem using C++ itself? Thank you... :) I am really looking forward to some help or guidance on how to go about this problem.
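
    Since the suspicion is about non-ASCII bytes, a small diagnostic script along the following lines can show what is actually in the output file instead of guessing from an editor. This is only a sketch in Python; the file name is taken from the question and the byte categories are an arbitrary split.

      # Sketch: inspect acle5v1_op.txt at the byte level to see what the C++ program
      # actually wrote (counts of printable ASCII, control bytes and non-ASCII bytes).
      from collections import Counter

      def summarize(path):
          with open(path, "rb") as f:      # read raw bytes, no decoding guesses
              data = f.read()
          kinds = Counter()
          for b in data:
              if 32 <= b <= 126 or b in (9, 10, 13):
                  kinds["printable ascii"] += 1
              elif b < 128:
                  kinds["other control byte"] += 1
              else:
                  kinds["non-ascii byte"] += 1
          return len(data), kinds

      if __name__ == "__main__":
          total, kinds = summarize("acle5v1_op.txt")
          print(total, "bytes:", dict(kinds))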

    Read the article

  • There seem to be some 'lingering' SSH connections on my server. How do I fix it?

    - by mike
    [root@server mike]# w 14:43:35 up 83 days, 1:25, 1 user, load average: 0.00, 0.00, 0.00 USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT mike pts/1 dsl-IP.w 14:43 0.00s 0.01s 0.03s sshd: mike [priv] [root@server mike]# ps aux | grep ssh root 1350 0.0 0.1 5276 1044 ? Ss Aug27 0:00 /usr/sbin/sshd root 14328 0.0 0.2 8020 2580 ? Ss 12:49 0:00 sshd: dave [priv] dave 14332 0.0 0.1 8020 1532 ? S 12:49 0:00 sshd: dave@notty dave 14333 0.0 0.1 4696 1444 ? Ss 12:49 0:00 /usr/lib/openssh/sftp-server root 14344 0.0 0.2 8020 2580 ? Ss 12:59 0:00 sshd: dave [priv] dave 14347 0.0 0.1 8168 1564 ? S 13:00 0:00 sshd: dave@notty dave 14348 0.0 0.1 4700 1504 ? Ss 13:00 0:00 /usr/lib/openssh/sftp-server root 14351 0.0 0.2 8020 2580 ? Ss 13:04 0:00 sshd: dave [priv] dave 14355 0.0 0.1 8168 1560 ? S 13:04 0:00 sshd: dave@notty dave 14356 0.0 0.1 4696 1472 ? Ss 13:04 0:00 /usr/lib/openssh/sftp-server root 14373 0.0 0.2 8020 2584 ? Ss 13:15 0:00 sshd: dave [priv] dave 14377 0.0 0.1 8168 1560 ? S 13:15 0:00 sshd: dave@notty dave 14378 0.0 0.1 4704 1500 ? Ss 13:15 0:00 /usr/lib/openssh/sftp-server root 14385 0.0 0.2 8020 2584 ? Ss 13:28 0:00 sshd: dave [priv] dave 14389 0.0 0.1 8168 1592 ? S 13:28 0:00 sshd: dave@notty dave 14390 0.0 0.1 4696 1508 ? Ss 13:28 0:00 /usr/lib/openssh/sftp-server root 14392 0.0 0.2 8020 2588 ? Ss 13:30 0:00 sshd: dave [priv] dave 14396 0.0 0.1 8168 1604 ? S 13:30 0:00 sshd: dave@notty dave 14397 0.0 0.1 4696 1492 ? Ss 13:30 0:00 /usr/lib/openssh/sftp-server root 14402 0.0 0.2 8020 2584 ? Ss 13:33 0:00 sshd: dave [priv] dave 14406 0.0 0.1 8020 1536 ? S 13:33 0:00 sshd: dave@notty dave 14407 0.0 0.1 4696 1460 ? Ss 13:33 0:00 /usr/lib/openssh/sftp-server root 14428 0.0 0.2 8020 2584 ? Ss 13:45 0:00 sshd: dave [priv] dave 14432 0.0 0.1 8168 1580 ? S 13:45 0:00 sshd: dave@notty dave 14433 0.0 0.1 4704 1512 ? Ss 13:45 0:00 /usr/lib/openssh/sftp-server root 14439 0.0 0.2 8020 2580 ? Ss 13:53 0:00 sshd: dave [priv] dave 14443 0.0 0.1 8020 1532 ? S 13:53 0:00 sshd: dave@notty dave 14444 0.0 0.1 4696 1448 ? Ss 13:53 0:00 /usr/lib/openssh/sftp-server root 14480 0.0 0.2 8020 2584 ? Ss 14:11 0:00 sshd: dave [priv] dave 14484 0.0 0.1 8168 1588 ? S 14:11 0:00 sshd: dave@notty dave 14485 0.0 0.1 4704 1492 ? Ss 14:11 0:00 /usr/lib/openssh/sftp-server root 14487 0.0 0.2 8020 2580 ? Ss 14:12 0:00 sshd: dave [priv] dave 14490 0.0 0.1 8020 1552 ? S 14:12 0:00 sshd: dave@notty dave 14492 0.0 0.1 4696 1472 ? Ss 14:12 0:00 /usr/lib/openssh/sftp-server root 14510 0.0 0.2 8020 2584 ? Ss 14:35 0:00 sshd: dave [priv] dave 14514 0.0 0.1 8168 1568 ? S 14:35 0:00 sshd: dave@notty dave 14515 0.0 0.1 4700 1492 ? Ss 14:35 0:00 /usr/lib/openssh/sftp-server root 14517 0.0 0.2 8020 2580 ? Ss 14:37 0:00 sshd: dave [priv] dave 14521 0.0 0.1 8020 1548 ? S 14:38 0:00 sshd: dave@notty dave 14522 0.0 0.1 4696 1464 ? Ss 14:38 0:00 /usr/lib/openssh/sftp-server root 14538 0.0 0.2 8020 2620 ? Ss 14:43 0:00 sshd: mike [priv] mike 14542 0.0 0.1 8020 1560 ? S 14:43 0:00 sshd: mike@pts/1 root 14554 0.0 0.0 1720 560 pts/1 S+ 14:43 0:00 grep ssh As you can see above, I, mike, am logged into SSH executing commands. This is shown from the w command. However, there's an odd amount of SSH related processes currently running. I figured dave's sftp session might not show up in the output of w for whatever reason but that doesn't explain all the running processes... What's wrong? :/

    Read the article

  • Tellago is still hiring….

    - by gsusx
    Tellago's SOA practice is rapidly growing and we are still hiring. In that sense, we are looking for Connected Systems (WCF, BizTalk, WF) experts who are passionate about building game-changing solutions with the latest Microsoft technologies. You will be working alongside technology gurus like DonXml, Pablo Cibraro or Dwight Goins. If you are interested and not afraid of working with a bunch of crazy people ;) please drop me a line at jesus dot rodriguez at tellago dot com. Hope to hear from...(read more)

    Read the article

  • We are hiring (take a minute to read this, it's not another BS talk ;) )

    - by gsusx
    I really wanted to wait until our new website was out to blog about this but I hope you can put up with the ugly website for a few more days :). Tellago keeps growing and, after a quick break at the beginning of the year, we are back in hiring mode :). We are currently expanding our teams in the United States and Argentina and have various positions open in the following categories. .NET developers: If you are an exceptional .NET programmer with a passion for creating great software solutions working...(read more)

    Read the article

  • Back from Teched US

    - by gsusx
    It's been a few weeks since I last blogged and, trust me, I am not happy about it :( I have been crazily busy with some of our projects at Tellago which you are going to hear more about in the upcoming weeks :) I was so busy that I didn't even have time to blog about my sessions at Teched US last week. This year I ended up presenting three sessions on three different tracks: BIE403 | Real-Time Business Intelligence with Microsoft SQL Server 2008 R2 Session Type: Breakout Session Real-time business...(read more)

    Read the article

  • Tellago && Tellago Studios 2010

    - by gsusx
    With 2011 around the corner, we at Tellago and Tellago Studios have been spending a lot of time evaluating our successes and failures (yes, those too ;)) of 2010 and delineating some of our goals and strategies for 2011. When I look at 2010, here are some of the things that quickly jump off the page: growing Tellago by 300%; launching a brand new company: Tellago Studios; expanding our customer base; establishing our business intelligence practice http://tellago.com/what-we-say/events/business-intelligence...(read more)

    Read the article

  • Limiting DOPs – Who rules over whom?

    - by jean-pierre.dijcks
    I've gotten a couple of questions from Dan Morgan and figured I'd start to answer them this way. Dan is running a big system with Database Resource Manager, and he is trying to make sure the system doesn't go crazy (remember, end users are never, ever crazy!) on very high DOPs. Q: How do I control statements with very high DOPs driven from user hints in queries? A: The best way to do this is to work with DBRM and impose limits on consumer groups. The Max DOP setting in DBRM allows you to override the hint. Now let's go into some more detail here. Assume my object (and for simplicity we assume there is a single object - and do remember that we always pick the highest DOP when in doubt and when conflicting DOPs are available in a query) has PARALLEL 64 as its setting. Assume that the query that selects something cool from that table lives in a consumer group with a Max DOP of 32. Assume no goofy things (like running out of parallel_max_servers) are happening. A query selecting from this table will run at DOP 32 because DBRM caps the DOP. As of 11.2.0.1 we also use the DBRM cap to create the original plan (at compile time) and not just enforce the cap at runtime. Now, my user is smart and writes a query with a parallel hint requesting DOP 128. This query is still capped by DBRM, and DBRM overrules the hint in the statement. The statement, despite the hint, runs at DOP 32. Note that in the hinted scenario we do compile the statement with DOP 128 (the optimizer obeys the hint). This is another reason to use table decoration rather than hints. Q: What happens if I set parallel_max_servers higher than processes (i.e. the maximum number of processes allowed to run on my machine)? A: Processes rules. It is important to understand that processes are fixed at startup time. If you increase parallel_max_servers above the number of processes in the processes parameter, you should get a warning in the alert log stating it cannot take effect. As a follow-up, a hinted query requesting more parallel processes than either parallel_max_servers or processes will not be able to acquire the requested number; those limits will prevent this. And since parallel_max_servers should be lower than processes, you can never go over either...
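
    The precedence described above (table or hint DOP first, then the DBRM consumer-group cap, and never more PX servers than parallel_max_servers, which itself cannot exceed processes) can be written down as a toy calculation. This is only an illustration of the rules in the post, not anything Oracle exposes, and it ignores that one query may need 2 x DOP servers.

      # Toy model of the capping order described above (a sketch, not an Oracle API).
      def effective_dop(requested_dop, dbrm_max_dop, parallel_max_servers, processes):
          dop = min(requested_dop, dbrm_max_dop)            # DBRM overrules the hint
          return min(dop, parallel_max_servers, processes)  # processes rules

      # The scenarios from the post: a PARALLEL 64 table, then a hinted DOP of 128,
      # both capped to 32 by the consumer group.
      print(effective_dop(64, 32, 128, 200))    # -> 32
      print(effective_dop(128, 32, 128, 200))   # -> 32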

    Read the article

  • Tellago speaks about Business Intelligence with SQL Server 2008 R2

    - by gsusx
    At Tellago, we always try to stay on the front lines of technology that can enhance our solution development practices. This year we are putting a lot of emphasis on business intelligence, and in particular the new set of BI technologies such as Microsoft's PowerPivot, Master Data Services and StreamInsight that are scheduled to be released with SQL Server 2008 R2. In the last few weeks we have been working closely with different Microsoft field offices to coordinate a series of customer events that...(read more)

    Read the article

  • Shader effect similar to Metro 2033 gasmask

    - by Tim
    I was thinking about effects in games the other day and I was reminded of the gasmask effect from Metro 2033. Once you put the gasmask on, the view blurred a bit in the corners and could ice up and even get cracked. I assume that something like that is done using a shader. I have been experimenting a bit with game development, so far mostly playing with existing rendering engines and adding physics support, etc. I would like to learn more about this sort of effect. Can someone give me a simple example of a shader that would alter the entire scene like this? Or, if not a shader, then an idea of how it would be done. Thanks. Edit: included a screenshot of the Metro 2033 gasmask effect.
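
    Conceptually it is a full-screen post-process: a weight that grows toward the edges of the screen drives how much blur, frost or cracking gets blended in. The sketch below shows just that radial weighting in Python/NumPy rather than in a real shader (the 0.6 inner radius and the frost colour are made-up numbers); in a fragment shader you would compute the same weight per pixel from the screen-space UV and use it to blend in a frost texture or a blurred copy of the scene.

      # Sketch: build a radial weight that is 0 in the screen centre and rises toward
      # the corners, then blend the frame toward a frost colour by that weight.
      import numpy as np

      def gasmask_overlay(frame, inner=0.6, frost=(0.85, 0.9, 1.0)):
          h, w, _ = frame.shape
          ys, xs = np.mgrid[0:h, 0:w]
          # normalized distance from the screen centre, ~1.0 at the corners
          dist = np.sqrt(((xs / w - 0.5) * 2) ** 2 + ((ys / h - 0.5) * 2) ** 2) / np.sqrt(2)
          weight = np.clip((dist - inner) / (1.0 - inner), 0.0, 1.0)[..., None]
          return frame * (1 - weight) + np.array(frost) * weight

      # frame = np.random.rand(720, 1280, 3); out = gasmask_overlay(frame)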

    Read the article

  • Partition Wise Joins II

    - by jean-pierre.dijcks
    One of the things that I did not talk about in the initial partition-wise join post was the effect it has on resource allocation on the database server. When Oracle applies a different join method - that is, not PWJ - what you will see in SQL Monitor (in Enterprise Manager) or in an explain plan is a set of producers and a set of consumers. The producers scan the tables in the join. If there are two tables, the producers first scan one table, then the other. The producers thus provide data to the consumers, and when the consumers have the data from both scans they do the join and give the data to the query coordinator. That behavior means that if you choose a degree of parallelism of 4 to run such a query with, Oracle will allocate 8 parallel processes. Of these 8 processes, 4 are producers and 4 are consumers. The consumers only actually do work once the producers are fully done with scanning both sides of the join. In the plan above you can see that the producers access table SALES [line 11] and then do a PX SEND [line 9]. That is the producer set of processes working. The consumers receive that data [line 8] and twiddle their thumbs while the producers go on and scan CUSTOMERS. The producers send that data to the consumers, indicated by PX SEND [line 5]. After receiving that data [line 4] the consumers do the actual join [line 3] and give the data to the QC [line 2]. BTW, the myth that you see twice the number of processes due to the setting PARALLEL_THREADS_PER_CPU=2 is obviously not true. The above is why you will see 2 times the processes of the DOP. In a PWJ plan the consumers are not present. Instead of producing rows and giving those to different processes, a PWJ only uses a single set of processes. Each process reads its piece of the join across the two tables and performs the join. The plan here is notably different from the initial plan. First of all, the hash join is done right on top of both table scans [line 8]. This query is a little more complex than the previous one, so there is a bit of noise above that bit of info, but for this post let's ignore that (the sort stuff). The important piece here is that the PWJ plan will typically be faster and, in terms of PX process count and resources, typically cheaper. You may want to look out for those plans and try to get them to appear a lot... CREDITS: credits for the plans and some of the info on the plans go to Maria, as she actually produced these plans and is the expert on plans in general... You can see her talk about explaining the explain plan and other optimizer stuff over here: ODTUG in Washington DC, June 27 - July 1; on the Optimizer blog; at OpenWorld in San Francisco, September 19 - 23. Happy joining and hope to see you all at ODTUG and OOW...
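
    As a back-of-the-envelope check of the resource argument, here is a toy sketch (Python, nothing Oracle-specific) of the PX server counts the two plan shapes imply:

      # Toy illustration: a producer/consumer plan allocates two sets of PX servers,
      # a partition-wise join only one.
      def px_servers(dop, partition_wise):
          return dop if partition_wise else 2 * dop

      print(px_servers(4, partition_wise=False))  # 8: 4 producers + 4 consumers
      print(px_servers(4, partition_wise=True))   # 4: a single set scans and joins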

    Read the article

  • SQL University: Parallelism Week - Introduction

    - by Adam Machanic
    Welcome to Parallelism Week at SQL University . My name is Adam Machanic, and I'm your professor. Imagine having 8 brains, or 16, or 32. Imagine being able to break up complex thoughts and distribute them across your many brains, so that you could solve problems faster. Now quit imagining that, because you're human and you're stuck with only one brain, and you only get access to the entire thing if you're lucky enough to have avoided abusing too many recreational drugs. For your database server,...(read more)

    Read the article

  • Integrating BizTalk Server and StreamInsight paper

    - by gsusx
    With all the holiday madness I didn't realize that my "Integrating BizTalk Server and StreamInsight" paper is now available on MSDN. This paper was originally an idea of the BizTalk product team and intends to present some fundamental scenarios that can be enabled by the combination of BizTalk Server and StreamInsight. Thanks to everybody who, directly or indirectly, provided feedback about this paper: Syed Rasheed, Mark Simms, Richard Seroter, Roman Schindlauer and Torsten Grabs from the StreamInsight...(read more)

    Read the article

  • Query Tuning Mastery at PASS Summit 2012: The Video

    - by Adam Machanic
    An especially clever community member was kind enough to reverse-engineer the video stream for me, and came up with a direct link to the PASS TV video stream for my Query Tuning Mastery: The Art and Science of Manhandling Parallelism talk, delivered at the PASS Summit last Thursday. I'm not sure how long this link will work, but I'd like to share it for my readers who were unable to see it in person or live on the stream. Start here. Skip past the keynote, to the 149-minute mark. Enjoy!...(read more)

    Read the article

  • Query Tuning Mastery at PASS Summit 2012: The Demos

    - by Adam Machanic
    For the second year in a row, I was asked to deliver a 500-level "Query Tuning Mastery" talk in room 6E of the Washington State Convention Center, for the PASS Summit. (Here's some information about last year's talk, on workspace memory.) And for the second year in a row, I had to deliver said talk at 10:15 in the morning, in a room used as overflow for the keynote, following a keynote speaker that didn't stop speaking on time. Frustrating! Last Thursday, after very, very quickly setting up and...(read more)

    Read the article

  • Tools for modelling data and workflows using structured text files

    - by Alexey
    Consider a case where I want to try out an idea for an application, but I want to avoid investing a lot of effort in coding the UI/workflows/database schema etc. before I see that it's going to be useful to me (as an example of a potential user). My idea is to stay lightweight and put all the data in text files. So the components could be the following: domain objects are represented by text files or fragments of them; domain objects are grouped by type using directories; the files are structured using a format that is both human- and machine-friendly, e.g. YAML; a smart text editor (e.g. vim, emacs, RubyMine) is used to edit and navigate those files; colour schemes and macros/custom commands of the text editor are used to manipulate those files effectively; scripts (or a lightweight web framework like Sinatra) are used to try some business logic ideas on top of the data model. The question is: are there tools or toolkits that support or can be adapted to this approach? Also, any ideas or links to articles/other knowledge sources are very welcome. And a more specific question: what is the simplest way to index a set of YAML files and keep that index up to date?
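
    For the last question, one lightweight option is to walk the directories, load each YAML file, and write a single index file mapping an identifier to its path, regenerating it whenever files change. Below is a sketch in Python with the third-party PyYAML package; the directory layout, the "id" key and the index file name are assumptions about how the files might be organised.

      # Sketch: build (and rebuild) a simple index of YAML "domain object" files.
      import os
      import yaml  # third-party: PyYAML

      def build_index(root, index_path="index.yaml"):
          index = {}
          for dirpath, _, filenames in os.walk(root):
              for name in filenames:
                  if not name.endswith((".yaml", ".yml")):
                      continue
                  path = os.path.join(dirpath, name)
                  with open(path) as f:
                      doc = yaml.safe_load(f) or {}
                  # fall back to the relative path when a file has no "id" field
                  rel = os.path.relpath(path, root)
                  key = doc.get("id", rel) if isinstance(doc, dict) else rel
                  index[key] = path
          with open(index_path, "w") as f:
              yaml.safe_dump(index, f)
          return index

      # Re-run build_index("data/") whenever files change, e.g. from an editor hook.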

    Read the article

  • Stereo image rectification and disparity: which algorithms?

    - by alessandro.francesconi
    I'm trying to figure out which are currently the two most efficient algorithms that permit, starting from an L/R pair of stereo images created using a traditional camera (so affected by some epipolar line misalignment), producing a pair of adjusted images plus their depth information by looking at their disparity. Actually I've found lots of papers about these two methods, like: "Computing Rectifying Homographies for Stereo Vision" (Zhang - seems one of the best for rectification only), "Three-step image rectification" (Monasse), "Rectification and Disparity" (slideshow by Navab), "A fast area-based stereo matching algorithm" (Di Stefano - seems a bit inaccurate), "Computing Visual Correspondence with Occlusions via Graph Cuts" (Kolmogorov - this one produces a very good disparity map, with occlusion information as well, but is it efficient?), "Dense Disparity Map Estimation Respecting Image Discontinuities" (Alvarez - too long for a first review). Could anyone please give me some advice for orienting myself in this wide topic? What kind of algorithm/method should I treat first, considering that I'll work on a very simple input: a pair of left and right images and nothing else, no more information (some papers are based on additional, pre-taken calibration info)? Speaking about working implementations, the only interesting results I've seen so far belong to this piece of software, but only for automatic rectification, not disparity: http://stereo.jpn.org/eng/stphmkr/index.html I tried the "auto-adjustment" feature and it seems really effective. Too bad there is no source code...
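
    Before diving into those papers, it may help to get a working baseline from OpenCV, which already wires the standard pipeline together. A minimal Python sketch is below; the feature detector, the SGBM parameters and the file names are assumptions, and uncalibrated (Hartley-style) rectification from a fundamental matrix is only approximate compared to a calibrated setup.

      # Sketch: uncalibrated rectification from matched features, then SGBM disparity.
      import cv2
      import numpy as np

      left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
      right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
      h, w = left.shape

      # 1. Match features and estimate the fundamental matrix.
      orb = cv2.ORB_create(4000)
      k1, d1 = orb.detectAndCompute(left, None)
      k2, d2 = orb.detectAndCompute(right, None)
      matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
      pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
      pts2 = np.float32([k2[m.trainIdx].pt for m in matches])
      F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)

      # 2. Rectify without calibration data (Hartley-style homographies).
      ok, H1, H2 = cv2.stereoRectifyUncalibrated(pts1[mask.ravel() == 1],
                                                 pts2[mask.ravel() == 1], F, (w, h))
      rect_l = cv2.warpPerspective(left, H1, (w, h))
      rect_r = cv2.warpPerspective(right, H2, (w, h))

      # 3. Dense disparity with semi-global block matching.
      sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
      disparity = sgbm.compute(rect_l, rect_r).astype(np.float32) / 16.0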

    Read the article
