Search Results

Search found 6053 results on 243 pages for 'solutionsfactory usage'.


  • Weird glitches on Intel iGPU

    - by FrederikVds
    I have a weird problem that I can't manage to describe in one word, so I'm having trouble searching for a solution. My monitors sometimes go black for a tenth of a second. Other times, they show the image shifted a few centimeters to the left or to the right. This happens on both of my monitors, but not necessarily at the same time. I would say it happens about once a minute, unless under heavy load, in which case it can happen every second or so. Interestingly, heavy CPU/memory usage can also cause this, not just heavy GPU usage. This only happens when they are both at 1920x1080, not when one of them, or both, are at a lower resolution. It also happens when they are in mirrored mode instead of extended desktop mode. My GPU is obviously not at full clock speed most of the time: this happens at 350 MHz as well as at 1200 MHz. So it doesn't seem like a matter of too little performance. I'm not seeing any traditional artefacts, even when I use MSI Kombustor, only this kind of full-screen glitches. CPU stressing software isn't reporting any issues either. I'm thinking maybe the connection between my CPU and my PCH isn't fast enough, but I can't find anyone with the same problem to confirm that. I'd rather not invest in a discrete GPU without being certain it will fix something. Does anyone know how to search for this, or even better, does anyone have a solution? My full specs are below. Thanks in advance! Specs: ASUS P8Z77-M Intel Core i5-3570K (with Intel HD 4000 Graphics) 2x4 GB AMD Performance Edition memory Corsair Force 3 Series Rev. B 120GB SSD Maxtor 200GB HD Lite-On DVD-RW Antec 350 Watt PSU 64-bit Windows 7 Professional 2x Iiyama ProLite E2208HDS display

    Read the article

  • Firefox: Clear History Is SUPER EFFECTIVE?

    - by acidzombie24
    I'm seeing a performance problem on certain sites (like Gmail) which clearing the history should not affect. Is this a website problem or a Firefox problem, and what can I do to fix it without clearing my history? Also, as a web developer I am interested in how to make this happen (or not happen). I'm using Firefox 8, and I confirmed the problem by copying my profile to Firefox 11 (portable). To reproduce: go to gmail.com and sign in, with Task Manager open. Once you click sign in or hit Enter, Gmail will bring up your emails. Keep your eye on the CPU usage. I checked, and right now on this machine it uses all my CPU for 22 seconds. Yes, 22 seconds. Once I cleared my "browser & download history" it's under 6 seconds. I have no idea why or how the size of the history and the CPU usage when loading Gmail are correlated. I have Firefox set up so it never clears the history, but 22 seconds is a disaster. Can someone explain why this is happening, or suggest a fix that isn't clearing my history? I tried visiting a few other websites and only Gmail eats up that much CPU; most websites take under 5 seconds of max CPU. So maybe this is a Gmail problem, or a Firefox problem that Gmail happens to hit? I still don't understand why it happens. Edit: I forgot to mention that places.sqlite is 90 MB. I don't think that matters; I have another SQLite file of 400 MB which is pretty much two large tables, and it has no performance issues.
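    A quick way to quantify the history that seems to correlate with the CPU spikes is to inspect places.sqlite directly. The sketch below is only illustrative: it assumes the standard Firefox places schema (the moz_places and moz_historyvisits tables), and the profile path is a hypothetical placeholder you would replace with your own.

    import os
    import sqlite3

    # Hypothetical profile path; point this at your own Firefox profile.
    PLACES = os.path.expanduser("~/.mozilla/firefox/xxxxxxxx.default/places.sqlite")

    def history_stats(db_path):
        """Report file size and row counts for the Firefox history database."""
        size_mb = os.path.getsize(db_path) / (1024 * 1024)
        # Open read-only so a running Firefox instance is not disturbed.
        conn = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
        try:
            places = conn.execute("SELECT COUNT(*) FROM moz_places").fetchone()[0]
            visits = conn.execute("SELECT COUNT(*) FROM moz_historyvisits").fetchone()[0]
        finally:
            conn.close()
        print(f"places.sqlite: {size_mb:.1f} MB, {places} places, {visits} visits")

    if __name__ == "__main__":
        history_stats(PLACES)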

    Read the article

  • Joomla performance problems on AWS

    - by Bobby Jack
    I'm running a site on AWS with the following setup: a single m1.small instance (web server), a single RDS m1.small database, and Joomla 1.5. Generally the site is performant, but it is fairly low-traffic, say around 50-100 visits per hour. However, at peak time we see about double that traffic. During peak time, pretty much every day:
    - CPU usage on the web server slowly climbs to 100%
    - CPU usage on the RDS server climbs quite quickly to about 30%, from an average of about 15%
    - Database connections shoot up to about 140, from a normal average of about 2 or 3
    The site is then occasionally unreachable, certainly according to Pingdom monitoring. Does anyone recognise this behaviour? Can you point me in the right direction to begin investigating? Of course, RDS makes it difficult to do things like slow query logging, so I've started by regularly dumping the MySQL process list into a file to see if there's anything I can spot there, but it would be good to have something more concrete to investigate. UPDATE: Can someone at least confirm that I'm right in saying that this level of traffic implies the problem must be a specific type of query taking way longer than it should to execute? This would happen if a table gets locked and many queries need to write to it, right? For this very reason, I've already changed the __session table type to InnoDB.
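    The "regularly dumping the MySQL process list into a file" step can be automated so that each snapshot is timestamped and easier to line up with the CPU spikes. This is a minimal sketch, assuming the mysql command-line client is installed on the web server; the host name and credentials are placeholders.

    import subprocess
    import time
    from datetime import datetime

    # Placeholder connection details for the RDS instance.
    HOST, USER, PASSWORD = "mydb.xxxxxxxx.rds.amazonaws.com", "admin", "secret"

    def dump_processlist(outfile="processlist.log", interval=10):
        """Append a timestamped SHOW FULL PROCESSLIST snapshot every `interval` seconds."""
        while True:
            snapshot = subprocess.run(
                ["mysql", "-h", HOST, "-u", USER, f"-p{PASSWORD}",
                 "-e", "SHOW FULL PROCESSLIST"],
                capture_output=True, text=True,
            )
            with open(outfile, "a") as f:
                f.write(f"--- {datetime.now().isoformat()} ---\n")
                f.write(snapshot.stdout or snapshot.stderr)
            time.sleep(interval)

    if __name__ == "__main__":
        dump_processlist()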

    Read the article

  • ActiveMQ Configuration with KahaDB

    - by xeraa
    We are using ActiveMQ 5.6.0 with KahaDB. It has produced quite a few log files, which is to be expected with our infrastructure, looking like this:
    $ ll -h /opt/activemq/data/kahadb/
    total 969M
    drwxr-xr-x 2 root root 4.0K Nov 3 12:47 ./
    drwxr-xr-x 3 activemq activemq 4.0K Sep 24 12:12 ../
    -rw-r--r-- 1 root root 39M Oct 16 07:57 db-202.log
    -rw-r--r-- 1 root root 38M Oct 16 07:57 db-203.log
    -rw-r--r-- 1 root root 33M Oct 17 08:12 db-238.log
    ...
    No more messages were processed when we ran into the 1 GB temp usage limit, or at least that is what we are assuming. Is that correct? The configuration looks like this:
    <systemUsage>
      <systemUsage>
        <memoryUsage>
          <memoryUsage limit="512mb"/>
        </memoryUsage>
        <storeUsage>
          <storeUsage limit="3 gb"/>
        </storeUsage>
        <tempUsage>
          <tempUsage limit="1 gb"/>
        </tempUsage>
      </systemUsage>
    </systemUsage>
    After cleaning up the log files and being way below the limits, still no messages were consumed by AMQ. Only when we manually purged a route did messages start to be delivered again. So we need to ensure that the KahaDB log size always stays below the temp usage limit, right? And is it a bug that delivery did not resume after we fixed that, or are there other steps to be taken?
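    To sanity-check the assumption that the journal is hitting the configured limits, it helps to compare the combined size of the db-*.log files against the storeUsage and tempUsage values. A minimal sketch, using the KahaDB path from the listing above and hard-coding the limits from the configuration shown:

    from pathlib import Path

    KAHADB_DIR = Path("/opt/activemq/data/kahadb")   # directory from the listing above
    STORE_LIMIT_GB, TEMP_LIMIT_GB = 3, 1             # limits from the systemUsage config

    def journal_size_gb(directory):
        """Sum the size of the KahaDB journal files (db-*.log) in GB."""
        return sum(f.stat().st_size for f in directory.glob("db-*.log")) / 1024 ** 3

    if __name__ == "__main__":
        used = journal_size_gb(KAHADB_DIR)
        print(f"journal: {used:.2f} GB "
              f"(storeUsage limit {STORE_LIMIT_GB} GB, tempUsage limit {TEMP_LIMIT_GB} GB)")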

    Read the article

  • Apache APC (Windows): Can I optimize these APC settings more?

    - by ar099968
    I would like to optimize APC some more but I am not sure where I could do something. First here is the stats after 1 week of running with the current configuration: General Cache Information APC Version 3.1.9 PHP Version 5.4.4 APC Host XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX Server Software Apache Shared Memory 1 Segment(s) with 128.0 MBytes (IPC shared memory, Windows Slim RWLOCK (native) locking) Start Time 2014/06/08 05:00:00 Uptime 6 days, 11 hours and 55 minutes File Upload Support 1 Host Status Diagrams Memory Usage Free: 99.7 MBytes (77.9%) Used: 28.3 MBytes (22.1%) Hits & Misses Hits: 510818 (99.9%) Misses: 608 (0.1%) Detailed Memory Usage and Fragmentation Fragmentation: 0.60% (609.8 KBytes out of 99.7 MBytes in 83 fragments) File Cache Information Cached Files 693 ( 35.4 MBytes) Hits 5143359 Misses 1087 Request Rate (hits, misses) 13.24 cache requests/second Hit Rate 13.24 cache requests/second Miss Rate 0.00 cache requests/second Insert Rate 0.01 cache requests/second Cache full count 0 User Cache Information Cached Variables 0 ( 0.0 Bytes) Hits 0 Misses 0 Request Rate (hits, misses) 0.00 cache requests/second Hit Rate 0.00 cache requests/second Miss Rate 0.00 cache requests/second Insert Rate 0.00 cache requests/second Cache full count 0 Runtime Settings apc.cache_by_default 1 apc.canonicalize 1 apc.coredump_unmap 0 apc.enable_cli 0 apc.enabled 1 apc.file_md5 0 apc.file_update_protection 2 apc.filters -/apc.php$, -/apc_clean.php$, -.tpl.cache.php$, -.tpl.php$, -.string.cache.php$, -.string.php$ apc.gc_ttl 3600 apc.include_once_override 0 apc.lazy_classes 0 apc.lazy_functions 0 apc.max_file_size 2M apc.num_files_hint 7000 apc.preload_path apc.report_autofilter 0 apc.rfc1867 0 apc.rfc1867_freq 0 apc.rfc1867_name APC_UPLOAD_PROGRESS apc.rfc1867_prefix upload_ apc.rfc1867_ttl 3600 apc.serializer default apc.shm_segments 1 apc.shm_size 128M apc.shm_strings_buffer 4M apc.slam_defense 0 apc.stat 1 apc.stat_ctime 0 apc.ttl 7200 apc.use_request_time 1 apc.user_entries_hint 4096 apc.user_ttl 7200 apc.write_lock 1

    Read the article

  • Missing Memory on Windows Server 2008

    - by Chris Lively
    I have a windows 2008 x64 server with 8GB of RAM installed. Task Manager and Resource Monitor both insist that 7.5GB of the RAM is in use. However, the memory list under Processes (Memory Private Bytes) doesn't add up. I do have Show Processes from all users checked and hand adding the numbers I come up with about 3.5GB of RAM. I also looked at the latest copy of SysInternals Process Explorer. And neither the Private Bytes or Working Set adds up to more than about 3.5GB of RAM in use. What's going on? ===== Update: I bounced the server to see what would happen with the memory utilization. After boot and regular operations began it sat at 3GB of RAM usage. 18 hours later, it's back up to 6.8GB of usage with no indication as to where the additional 3.5GB or so of RAM is being used. Here are links to screen shots of the resource monitor and task manager: Resource Monitor Task Manager Update 2: Well, I believe I located the problem. When I detached one of the larger databases from my sql server the amount of ram shown as "in use" dropped drastically. The Memory Private Bytes count barely moved. So I'm guessing that SQL server has some way of allocating memory where it doesn't really show up in any of the monitors. I went further and created a new database file, then transferred all of the data from the one I detached. Even though it has the same data, and the same transactions going through it, the memory in use has stayed low. Maybe there was some corruption in the DB? I'll leave it to the DB gods and go searching for another "problem" ;)

    Read the article

  • Why is my browser using so much memory?

    - by Steve
    Hi. I've recently had problems with Firefox running very slowly when I have many tabs open; say 20 tabs. My whole system would slow down. I decided to give Google Chrome a try, and it started out fine. But lately I am finding that it too, slows down my whole system. Looking at Task Manager, chrome.exe is using about 250MB of memory in about 6 different entries in task manager. However, when I shut Chrome down, memory usage is reduced by about 600MB. How can this be? (shows drop in memory usage after ending Chrome.) When my system locks up with Chrome having many tabs open, it takes 10 seconds to load the Start Menu, 10 seconds to expand All Programs, and each folder and subfolder, and 30 seconds for the program to be highlighted under my mouse. It also takes 10 seconds to switch to Notepad. Why is Chrome appearing to use so much more memory than Task Manager indicates? Why is my pagefile being used when I have around 1.1GB of memory? Can I set Chrome to run in RAM and not in the pagefile? How can 20 tabs use 600MB? That's 30MB per tab. Thanks for your help.

    Read the article

  • How to fix Windows 7 device removal notification loop

    - by Barry Kelly
    Bit of an odd one this. One of our PCs is getting caught in a loop some time after being turned on, usually after a USB storage device has been attached - sometimes an iPod, sometimes a GPS. Specifically, Windows Explorer starts showing a drive icon and letter (E:, as of right now) for the System partition (the small hidden one at the start of the boot drive). Then, the icon disappears. Then it reappears again. And disappears. It does this very quickly, at what looks like maybe 50 times a second. CPU usage in this loop is also very high; averages about 66%. This machine has an i7 920 CPU, which is quad core with hyperthreading; so this usage rate works out to about 5 100% busy threads, along with whatever normal idle load is (particularly Task Manager itself). Inspecting with Process Explorer shows that the device removal notification infrastructure has gone berserk. The threads in system service processes (i.e. apart from Windows Explorer) which are using all the CPU power relate to device notification. The Disk Management MMC snap-in also fails to run when the loop starts. The only way to break the loop, it seems, is to reboot the machine. Anyone seen anything similar to this, and know of a way to fix it? Machine details: Windows 7 x64, fully patched i7 920, 12GB RAM Intel SSD 80GB (X25-M, I believe; not G2) 2TB 5.2K disk for bulk storage AMD HD 5870 Further hardware details await. I'm going to go through and update all drivers I can find.

    Read the article

  • Windows Server 2008 Alerting to Low memory

    - by t1nt1n
    I have a file and print server running on Windows 2008 R2, fully patched, in a vSphere environment (ESXi 5.1 fully updated). Every evening between 19:20 and 19:30 our monitoring software reports that the available memory is at 1% and performance is dire. There is nothing in the event logs to point to an issue. At this point in the evening I am generally the only user on the system, checking to see why these alerts are going off. Things I have done:
    - Checked to see if any backups are running – none at all.
    - Checked scheduled tasks – none before or during this time period.
    - Moved the VM to another host.
    - Disabled AV to rule that out as the issue.
    The server does not have any memory problems during the day when fully loaded with about 50 users. The server had 4 GB of RAM provisioned, but I have increased this to 5 GB. Running PerfMon at the time (I will save the graphs tonight), there is very little CPU usage but RAM usage goes up.

    Read the article

  • Windows Server 2003 speed issues

    - by farzinSH
    I have an HP server with Windows Server 2003 and 50 Windows XP clients. For the past week and a half the network speed has suddenly dropped 2-3 times per day. It gets so slow that none of the clients can work with the HIS program installed on them. We have tried many different things, such as replacing the hubs, switches and even some wires. Each time, one of these changes solves the problem and the network goes back to its normal state. I checked everything. Even when I disconnected all the clients from the server and connected it to just one computer, the problem still remained for 2 hours. I have narrowed the problem down to a couple of likely suspects: viruses? (Updated Kaspersky running on the server shows none.) Server hardware failure? Physical memory usage on the server? (The last time the problem occurred, none of the changes above solved the issue, so I restarted the server and checked the physical memory usage, which was 2 GB. But I noticed it increases over time to over 9 GB; the server has 16 GB of RAM.) I have searched the internet and found nothing. Any help would be appreciated a lot. Thanks in advance.

    Read the article

  • Very different results from df after a few seconds

    - by tatus2
    When the backup moves the files from one server to the other the results from df change every few seconds in an impossible manner. The source host is running rsync. On the destination host I'm running the following command every few seconds: echo `date` `df|grep md0` Results are below: Sat Jun 29 23:57:12 CEST 2013 /dev/md0 4326425568 579316100 3527339636 15% /MD0 Sat Jun 29 23:57:14 CEST 2013 /dev/md0 4326425568 852513700 3254142036 21% /MD0 Sat Jun 29 23:57:15 CEST 2013 /dev/md0 4326425568 969970340 3136685396 24% /MD0 Sat Jun 29 23:57:17 CEST 2013 /dev/md0 4326425568 1255222180 2851433556 31% /MD0 Sat Jun 29 23:57:20 CEST 2013 /dev/md0 4326425568 1276006720 2830649016 32% /MD0 Sat Jun 29 23:57:24 CEST 2013 /dev/md0 4326425568 1355440016 2751215720 34% /MD0 Sat Jun 29 23:57:26 CEST 2013 /dev/md0 4326425568 1425090960 2681564776 35% /MD0 Sat Jun 29 23:57:27 CEST 2013 /dev/md0 4326425568 1474601872 2632053864 36% /MD0 Sat Jun 29 23:57:28 CEST 2013 /dev/md0 4326425568 1493627384 2613028352 37% /MD0 Sat Jun 29 23:57:32 CEST 2013 /dev/md0 4326425568 615934400 3490721336 15% /MD0 Sat Jun 29 23:57:33 CEST 2013 /dev/md0 4326425568 636071360 3470584376 16% /MD0 As you can see I start from USE of 15% and after 15 seconds I'm at 37% (I don't need to mention that the backup can not copy this huge amount of data in such a short time). After ~20 seconds the cycle closes. I'm again roughly at the same usage as earlier. The value is reasonable, ca. 35 Mb were copied. Can somebody explain to me what is going on? Does df only make an estimation of usage instead of used value?
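    The same sampling can be done without shelling out to df, which also makes it easy to log exact byte counts for comparison. A minimal sketch using Python's shutil.disk_usage on the /MD0 mount point from the question:

    import shutil
    import time
    from datetime import datetime

    MOUNT = "/MD0"  # mount point from the question

    def watch(interval=2):
        """Print a timestamped usage sample for MOUNT every `interval` seconds."""
        while True:
            usage = shutil.disk_usage(MOUNT)
            pct = 100 * usage.used / usage.total
            print(f"{datetime.now():%a %b %d %H:%M:%S} "
                  f"used={usage.used} free={usage.free} ({pct:.0f}%)")
            time.sleep(interval)

    if __name__ == "__main__":
        watch()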

    Read the article

  • Solving embarrassingly parallel problems using Python multiprocessing

    - by gotgenes
    How does one use multiprocessing to tackle embarrassingly parallel problems? Embarassingly parallel problems typically consist of three basic parts: Read input data (from a file, database, tcp connection, etc.). Run calculations on the input data, where each calculation is independent of any other calculation. Write results of calculations (to a file, database, tcp connection, etc.). We can parallelize the program in two dimensions: Part 2 can run on multiple cores, since each calculation is independent; order of processing doesn't matter. Each part can run independently. Part 1 can place data on an input queue, part 2 can pull data off the input queue and put results onto an output queue, and part 3 can pull results off the output queue and write them out. This seems a most basic pattern in concurrent programming, but I am still lost in trying to solve it, so let's write a canonical example to illustrate how this is done using multiprocessing. Here is the example problem: Given a CSV file with rows of integers as input, compute their sums. Separate the problem into three parts, which can all run in parallel: Process the input file into raw data (lists/iterables of integers) Calculate the sums of the data, in parallel Output the sums Below is traditional, single-process bound Python program which solves these three tasks: #!/usr/bin/env python # -*- coding: UTF-8 -*- # basicsums.py """A program that reads integer values from a CSV file and writes out their sums to another CSV file. """ import csv import optparse import sys def make_cli_parser(): """Make the command line interface parser.""" usage = "\n\n".join(["python %prog INPUT_CSV OUTPUT_CSV", __doc__, """ ARGUMENTS: INPUT_CSV: an input CSV file with rows of numbers OUTPUT_CSV: an output file that will contain the sums\ """]) cli_parser = optparse.OptionParser(usage) return cli_parser def parse_input_csv(csvfile): """Parses the input CSV and yields tuples with the index of the row as the first element, and the integers of the row as the second element. The index is zero-index based. :Parameters: - `csvfile`: a `csv.reader` instance """ for i, row in enumerate(csvfile): row = [int(entry) for entry in row] yield i, row def sum_rows(rows): """Yields a tuple with the index of each input list of integers as the first element, and the sum of the list of integers as the second element. The index is zero-index based. :Parameters: - `rows`: an iterable of tuples, with the index of the original row as the first element, and a list of integers as the second element """ for i, row in rows: yield i, sum(row) def write_results(csvfile, results): """Writes a series of results to an outfile, where the first column is the index of the original row of data, and the second column is the result of the calculation. The index is zero-index based. 
:Parameters: - `csvfile`: a `csv.writer` instance to which to write results - `results`: an iterable of tuples, with the index (zero-based) of the original row as the first element, and the calculated result from that row as the second element """ for result_row in results: csvfile.writerow(result_row) def main(argv): cli_parser = make_cli_parser() opts, args = cli_parser.parse_args(argv) if len(args) != 2: cli_parser.error("Please provide an input file and output file.") infile = open(args[0]) in_csvfile = csv.reader(infile) outfile = open(args[1], 'w') out_csvfile = csv.writer(outfile) # gets an iterable of rows that's not yet evaluated input_rows = parse_input_csv(in_csvfile) # sends the rows iterable to sum_rows() for results iterable, but # still not evaluated result_rows = sum_rows(input_rows) # finally evaluation takes place as a chain in write_results() write_results(out_csvfile, result_rows) infile.close() outfile.close() if __name__ == '__main__': main(sys.argv[1:]) Let's take this program and rewrite it to use multiprocessing to parallelize the three parts outlined above. Below is a skeleton of this new, parallelized program, that needs to be fleshed out to address the parts in the comments: #!/usr/bin/env python # -*- coding: UTF-8 -*- # multiproc_sums.py """A program that reads integer values from a CSV file and writes out their sums to another CSV file, using multiple processes if desired. """ import csv import multiprocessing import optparse import sys NUM_PROCS = multiprocessing.cpu_count() def make_cli_parser(): """Make the command line interface parser.""" usage = "\n\n".join(["python %prog INPUT_CSV OUTPUT_CSV", __doc__, """ ARGUMENTS: INPUT_CSV: an input CSV file with rows of numbers OUTPUT_CSV: an output file that will contain the sums\ """]) cli_parser = optparse.OptionParser(usage) cli_parser.add_option('-n', '--numprocs', type='int', default=NUM_PROCS, help="Number of processes to launch [DEFAULT: %default]") return cli_parser def main(argv): cli_parser = make_cli_parser() opts, args = cli_parser.parse_args(argv) if len(args) != 2: cli_parser.error("Please provide an input file and output file.") infile = open(args[0]) in_csvfile = csv.reader(infile) outfile = open(args[1], 'w') out_csvfile = csv.writer(outfile) # Parse the input file and add the parsed data to a queue for # processing, possibly chunking to decrease communication between # processes. # Process the parsed data as soon as any (chunks) appear on the # queue, using as many processes as allotted by the user # (opts.numprocs); place results on a queue for output. # # Terminate processes when the parser stops putting data in the # input queue. # Write the results to disk as soon as they appear on the output # queue. # Ensure all child processes have terminated. # Clean up files. infile.close() outfile.close() if __name__ == '__main__': main(sys.argv[1:]) These pieces of code, as well as another piece of code that can generate example CSV files for testing purposes, can be found on github. I would appreciate any insight here as to how you concurrency gurus would approach this problem. Here are some questions I had when thinking about this problem. Bonus points for addressing any/all: Should I have child processes for reading in the data and placing it into the queue, or can the main process do this without blocking until all input is read? Likewise, should I have a child process for writing the results out from the processed queue, or can the main process do this without having to wait for all the results? 
Should I use a processes pool for the sum operations? If yes, what method do I call on the pool to get it to start processing the results coming into the input queue, without blocking the input and output processes, too? apply_async()? map_async()? imap()? imap_unordered()? Suppose we didn't need to siphon off the input and output queues as data entered them, but could wait until all input was parsed and all results were calculated (e.g., because we know all the input and output will fit in system memory). Should we change the algorithm in any way (e.g., not run any processes concurrently with I/O)?
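    For reference, one way to flesh out the skeleton above is the producer/worker/consumer layout with two multiprocessing.Queue objects and a sentinel value to signal shutdown. This is only a minimal sketch of that pattern under simplified assumptions (in-memory rows instead of CSV files, no chunking, no command-line handling), not a full replacement for the program above:

    import multiprocessing as mp

    SENTINEL = None  # marks the end of a queue

    def worker(in_q, out_q):
        """Pull (index, row) pairs, sum each row, and push (index, total)."""
        for i, row in iter(in_q.get, SENTINEL):
            out_q.put((i, sum(row)))
        out_q.put(SENTINEL)  # tell the consumer this worker is done

    def run(rows, numprocs=mp.cpu_count()):
        in_q, out_q = mp.Queue(), mp.Queue()
        procs = [mp.Process(target=worker, args=(in_q, out_q)) for _ in range(numprocs)]
        for p in procs:
            p.start()
        # Producer: the main process feeds the input queue, then sends one
        # sentinel per worker so every worker eventually exits its loop.
        for item in enumerate(rows):
            in_q.put(item)
        for _ in procs:
            in_q.put(SENTINEL)
        # Consumer: collect results until every worker has reported done.
        results, finished = {}, 0
        while finished < numprocs:
            msg = out_q.get()
            if msg is SENTINEL:
                finished += 1
            else:
                i, total = msg
                results[i] = total
        for p in procs:
            p.join()
        return [results[i] for i in sorted(results)]

    if __name__ == "__main__":
        print(run([[1, 2, 3], [10, 20], [7]]))  # -> [6, 30, 7]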

    Read the article

  • Why should you choose Oracle WebLogic 12c instead of JBoss EAP 6?

    - by Ricardo Ferreira
    In this post I will cover some technical differences between Oracle WebLogic 12c and JBoss EAP 6, which was released by Red Hat a couple of days ago. This article aims to help you evaluate the key points you should consider when choosing a Java EE application server. In the following sections I will present some important aspects that customers most often ask us about when they are seriously evaluating a middleware infrastructure, especially if they are considering JBoss for some reason. I would suggest that you keep the following question in mind while reading the points: "Why should I choose JBoss instead of WebLogic?" 1) Multi-Datacenter Deployment and Clustering - D/R ("Disaster & Recovery") architecture support is embedded in the WebLogic Server 12c product. JBoss EAP 6, on the other hand, has no direct D/R support included; Red Hat relies on third-party tools at additional cost. When you consider a middleware solution to host your business-critical application, you should worry about every architectural aspect of the solution. Fail-over support is only one small aspect of a truly reliable solution; if you do not address D/R, your solution will not be reliable. That said, with Red Hat and JBoss EAP 6 you have this extra cost, which will considerably increase the total cost of ownership of the solution. As we commonly hear from analysts, open source is not so cheap once you see the big picture. - WebLogic Server 12c supports advanced LAN clustering, detection of dead servers and a common alert framework. JBoss EAP 6, on the other hand, has limited LAN clustering support with no dead-server detection. It does not generate any alerts when servers go down (only if you buy JBoss ON, which is a separate technology that so far does not support JBoss EAP 6), and manual intervention is required when servers go down. In most cases, administrators must rely on "kill -9", "tail -f someFile.log" and "ps ax | grep java" to manage failures and clustering anomalies. - WebLogic Server 12c supports the concept of Node Manager, a separate process that runs on the physical or virtual servers and extends administration of the cluster to WebLogic managed servers that are often distributed across multiple machines and geographic locations. JBoss EAP 6, on the other hand, has no equivalent technology; whole server instances must be managed individually. - WebLogic Server 12c Node Manager supports Coherence to boost performance when managing servers. JBoss EAP 6, on the other hand, has no similar technology; there is no way to coordinate JBoss instances using high-throughput, low-latency protocols like InfiniBand. The Node Manager feature also enables another very important capability that JBoss EAP lacks: secure administration. When using WebLogic Node Manager, all administration tasks are sent to the managed servers in a secure tunnel protected by a certificate, which means that the transport layer between the WebLogic administration console and the managed servers is secured by SSL. - WebLogic Server 12c is now integrated with OTD ("Oracle Traffic Director"), a web server technology derived from the former Sun iPlanet Web Server. This software complements the web server support offered by OHS ("Oracle HTTP Server").
Using OTD, WebLogic instances are load-balanced by a high powerful software that knows how to handle SDP ("Socket Direct Protocol") over InfiniBand, which boost performance when used with engineered systems technologies like Oracle Exalogic Elastic Cloud. JBoss EAP 6 on the other hand only offers support to Apache Web Server with custom modules created to deal with JBoss clusters, but only across standard TCP/IP networks.  2) Application and Runtime Diagnostics - WebLogic Server 12c have diagnostics capabilities embedded on the server called WLDF ("WebLogic Diagnostic Framework") so there is no need to rely on third-part tools. JBoss EAP 6 on the other hand has no diagnostics capabilities. Their only diagnostics tool is the log generated by the application server. Admin people are encouraged to analyse thousands of log lines to find out what is going on. - WebLogic Server 12c complement WLDF with JRockit MC ("Mission Control"), which provides to administrators and developers a complete insight about the JVM performance, behavior and possible bottlenecks. WebLogic Server 12c also have an classloader analysis tool embedded, and even a log analyzer tool that enables administrators and developers to view logs of multiple servers at the same time. JBoss EAP 6 on the other hand relies on third-part tools to do something similar. Again, only log searching are offered to find out whats going on. - WebLogic Server 12c offers end-to-end traceability and monitoring available through Oracle EM ("Enterprise Manager"), including monitoring of business transactions that flows through web servers, ESBs, application servers and database servers, all of this with high deep JVM analysis and diagnostics. JBoss EAP 6 on the other hand, even using JBoss ON ("Operations Network"), which is a separated technology, does not support those features. Red Hat relies on third-part tools to provide direct Oracle database traceability across JVMs. One of those tools are Oracle EM for non-Oracle middleware that manage JBoss, Tomcat, Websphere and IIS transparently. - WebLogic Server 12c with their JRockit support offers a tool called JRockit Flight Recorder, which can give developers a complete visibility of a certain period of application production monitoring with zero extra overhead. This automatic recording allows you to deep analyse threads latency, memory leaks, thread contention, resource utilization, stack overflow damages and GC ("Garbage Collection") cycles, to observe in real time stop-the-world phenomenons, generational, reference count and parallel collects and mutator threads analysis. JBoss EAP 6 don't even dream to support something similar, even because they don't have their own JVM. 3) Application Server Administration - WebLogic Server 12c offers a complete administration console complemented with scripting and macro-like recording capabilities. A single WebLogic console can managed up to hundreds of WebLogic servers belonging to the same domain. JBoss EAP 6 on the other hand has a limited console and provides a XML centric administration. JBoss, after ten years, started the development of a rudimentary centralized administration that still leave a lot of administration tasks aside, so admin people and developers must touch scripts and XML configuration files for most advanced and even simple administration tasks. This lead applications to error prone and risky deployments. 
Even using JBoss ON, JBoss EAP are not able to offer decent administration features for admin people which must be high skilled in JBoss internal architecture and its managing capabilities. - Oracle EM is available to manage multiple domains, databases, application servers, operating systems and virtualization, with a complete end-to-end visibility. JBoss ON does not provide management capabilities across the complete architecture, only basic monitoring. Even deployment must be done aside JBoss ON which does no integrate well with others softwares than JBoss. Until now, JBoss ON does not supports JBoss EAP 6, so even their minimal support for JBoss are not available for JBoss EAP 6 leaving customers uncovered and subject to high skilled JBoss admin people. - WebLogic Server 12c has the same administration model whatever is the topology selected by the customer. JBoss EAP 6 on the other hand differentiates between two operational models: standalone-mode and domain-mode, that are not consistent with each other. Depending on the mode used, the administration skill is different. - WebLogic Server 12c has no point-of-failures processes, and it does not need to define any specialized server. Domain model in WebLogic is available for years (at least ten years or more) and is production proven. JBoss EAP 6 on the other hand needs special processes to garantee JBoss integrity, the PC ("Process-Controller") and the HC ("Host-Controller"). Different from WebLogic, the domain model in JBoss is quite new (one year at tops) of maturity, and need to mature considerably until start doing things like WebLogic domain model does. - WebLogic Server 12c supports parallel deployment model which enables some artifacts being deployed at the same time. JBoss EAP 6 on the other hand does not have any similar feature. Every deployment are done atomically in the containers. This means that if you have a huge EAR (an EAR of 120 MB of size for instance) and deploy onto JBoss EAP 6, this EAR will take some minutes in order to starting accept thread requests. The same EAR deployed onto WebLogic Server 12c will reduce the deployment time at least in 2X compared to JBoss. 4) Support and Upgrades - WebLogic Server 12c has patch management available. JBoss EAP 6 on the other hand has no patch management available, each JBoss EAP instance should be patched manually. To achieve such feature, you need to buy a separated technology called JBoss ON ("Operations Network") that manage this type of stuff. But until now, JBoss ON does not support JBoss EAP 6 so, in practice, JBoss EAP 6 does not have this feature. - WebLogic Server 12c supports previuous WebLogic domains without any reconfiguration since its kernel is robust and mature since its creation in 1995. JBoss EAP 6 on the other hand has a proven lack of supportability between JBoss AS 4, 5, 6 and 7. Different kernels and messaging engines were implemented in JBoss stack in the last five years reveling their incapacity to create a well architected and proven middleware technology. - WebLogic Server 12c has patch prescription based on customer configuration. JBoss EAP 6 on the other hand has no such capability. People need to create ticket supports and have their installations revised by Red Hat support guys to gain some patch prescription from them. - Oracle WebLogic Server independent of the version has 8 years of support of new patches and has lifetime release of existing patches beyond that. 
JBoss EAP 6 on the other hand provides patches for a specific application server version up to 5 years after the release date. JBoss EAP 4 and previous versions had only 4 years. A good question that Red Hat will argue to answer is: "what happens when you find issues after year 5"?  5) RAC ("Real Application Clusters") Support - WebLogic Server 12c ships with a specific JDBC driver to leverage Oracle RAC clustering capabilities (Fast-Application-Notification, Transaction Affinity, Fast-Connection-Failover, etc). Oracle JDBC thin driver are also available. JBoss EAP 6 on the other hand ships only the standard Oracle JDBC thin driver. Load balancing with Oracle RAC are not supported. Manual intervention in case of planned or unplanned RAC downtime are necessary. In JBoss EAP 6, situation does not reestablish automatically after downtime. - WebLogic Server 12c has a feature called Active GridLink for Oracle RAC which provides up to 3X performance on OLTP applications. This seamless integration between WebLogic and Oracle database enable more value added to critical business applications leveraging their investments in Oracle database technology and Oracle middleware. JBoss EAP 6 on the other hand has no performance gains at all, even when admin people implement some kind of connection-pooling tuning. - WebLogic Server 12c also supports transaction and web session affinity to the Oracle RAC, which provides aditional gains of performance. This is particularly interesting if you are creating a reliable solution that are distributed not only in an LAN cluster, but into a different data center. JBoss EAP 6 on the other hand has no such support. 6) Standards and Technology Support - WebLogic Server 12c is fully Java EE 6 compatible and production ready since december of 2011. JBoss EAP 6 on the other hand became fully compatible with Java EE 6 only in the community version after three months, and production ready only in a few days considering that this article was written in June of 2012. Red Hat says that they are the masters of innovation and technology proliferation, but compared with Oracle and even other proprietary vendors like IBM, they historically speaking are lazy to deliver the most newest technologies and standards adherence. - Oracle is the steward of Java, driving innovation into the platform from commercial and open-source vendors. Red Hat on the other hand does not have its own JVM and relies on third-part JVMs to complete their application server offer. 95% of Red Hat customers are using Oracle HotSpot as JVM, which means that without Oracle involvement, their support are limited exclusively to the application server layer and we all know that most problems are happens in the JVM layer. - WebLogic Server 12c supports natively JDK 7, which empower developers to explore the maximum of the Java platform productivity when writing code. This feature differentiate WebLogic from others application servers (except GlassFish that are also managed by Oracle) because the usage of JDK 7 introduce such remarkable productivity features like the "try-with-resources" enhancement, catching multiple exceptions with one try block, Strings in the switch statements, JVM improvements in terms of JDBC, I/O, networking, security, concurrency and of course, the most important feature of Java 7: native support for multiple non-Java languages. More features regarding JDK 7 can be found here. 
JBoss EAP 6 on the other hand does not support JDK 7 officially, they comment in their community version that "Java SE 7 can be used with JBoss 7" which does not gives you any guarantees of enterprise support for JDK 7. - Oracle WebLogic Server 12c supports integration with Spring framework allowing Spring applications to use WebLogic special transaction manager, exposing bean interfaces to WebLogic MBeans to take advantage of all WebLogic monitoring and administration advantages. JBoss EAP 6 on the other hand has no special integration with Spring. In fact, Red Hat offers a suspicious package called "JBoss Web Platform" that in theory supports Spring, but in practice this package does not offers any special integration. It is just a facility for Red Hat customers to have support from both JBoss and Spring technology using the same customer support. 7) Lightweight Development - Oracle WebLogic Server 12c and Oracle GlassFish are completely integrated and can share applications without any modifications. Starting with the 12c version, WebLogic now understands natively GlassFish deployment descriptors and specific configurations in order to offer you a truly and reliable migration path from a community Java EE application server to a enterprise middleware product like WebLogic. JBoss EAP 6 on the other hand has no support to natively reuse an existing (or still in development) application from JBoss AS community server. Users of JBoss suffer of critical issues during deployment time that includes: changing the libraries and dependencies of the application, patching the DTD or XSD deployment descriptors, refactoring of the application layers due classloading issues and anomalies, rebuilding of persistence, business and web layers due issues with "usage of the certified version of an certain dependency" or "frameworks that Red Hat potentially does not recommend" etc. If you have the culture or enterprise IT directive of developing Java EE applications using community middleware to in a certain future, transition to enterprise (supported by a vendor) middleware, Oracle WebLogic plus Oracle GlassFish offers you a more sustainable solution. - WebLogic Server 12c has a very light ZIP distribution (less than 165 MB). JBoss EAP 6 ZIP size is around 130 MB, together with JBoss ON you have more 100 MB resulting in a higher download footprint. This is particularly interesting if you plan to use automated setup of application server instances (for example, to rapidly setup a development or staging environment) using Maven or Hudson. - WebLogic Server 12c has a complete integration with Maven allowing developers to setup WebLogic domains with few commands. Tasks like downloading WebLogic, installation, domain creation, data sources deployment are completely integrated. JBoss EAP 6 on the other hand has a limited offer integration with those tools.  - WebLogic Server 12c has a startup mode called WLX that turns-off EJB, JMS and JCA containers leaving enabled only the web container with Java EE 6 web profile. JBoss EAP 6 on the other hand has no such feature, you need to disable manually the containers that you do not want to use. - WebLogic Server 12c supports fastswap, which enables you to change classes without redeployment. This is particularly interesting if you are developing patches for the application that is already deployed and you do not want to redeploy the entire application. 
This is the same behavior that most application servers offers to JSP pages, but with WebLogic Server 12c, you have the same feature for Java classes in general. JBoss EAP 6 on the other hand has no such support. Even JBoss EAP 5 does not support this until now. 8) JMS and Messaging - WebLogic Server 12c has a proven and high scalable JMS implementation since its initial release in 1995. JBoss EAP 6 on the other hand has a still immature technology called HornetQ, which was introduced in JBoss EAP 5 replacing everything that was implemented in the previous versions. Red Hat loves to introduce new technologies across JBoss versions, playing around with customers and their investments. And when they are asked about why they have changed the implementation and caused such a mess, their answer is always: "the previous implementation was inadequate and not aligned with the community strategy so we are creating a new a improved one". This Red Hat practice leads to uncomfortable investments that in a near future (sometimes less than a year) will be affected in someway. - WebLogic Server 12c has troubleshooting and monitoring features included on the WebLogic console and WLDF. JBoss EAP 6 on the other hand has no direct monitoring on the console, activity is reflected only on the logs, no debug logs available in case of JMS issues. - WebLogic Server 12c has extremely good performance and scalability. JBoss EAP 6 on the other hand has a JMS storage mechanism relying on Oracle database or MySQL. This means that if an issue in production happens and Red Hat affirms that an performance issue is happening due to database problems, they will not support you on the performance issue. They will orient you to call Oracle instead. - WebLogic Server 12c supports messaging enterprise features like SAF ("Store and Forward"), Distributed Queues/Topics and Foreign JMS providers support that leverage JMS implementations without compromise developer code making things completely transparent. JBoss EAP 6 on the other hand do not even dream to support such features. 9) Caching and Grid - Coherence, which is the leading and most mature data grid technology from Oracle, is available since early 2000 and was integrated with WebLogic in 2009. Coherence and WebLogic clusters can be both managed from WebLogic administrative console. Even Node Manager supports Coherence. JBoss on the other hand discontinued JBoss Cache, which was their caching implementation just like they did with the messaging implementation (JBossMQ) which was a issue for long term customers. JBoss EAP 6 ships InfiniSpan version 1.0 which is immature and lack a proven record of successful cases and reliability. - WebLogic Server 12c has a feature called ActiveCache which uses Coherence to, without any code changes, replicate HTTP sessions from both WebLogic and other application servers like JBoss, Tomcat, Websphere, GlassFish and even Microsoft IIS. JBoss EAP 6 on the other hand does have such support and even when they do in the future, they probably will support only their own application server. - Coherence can be used to manage both L1 and L2 cache levels, providing support to Oracle TopLink and others JPA compliant implementations, even Hibernate. JBoss EAP 6 and Infinispan on the other hand supports only Hibernate. And most important of all: Infinispan does not have any successful case of L1 or L2 caching level support using Hibernate, which lead us to reflect about its viability. 
10) Performance - WebLogic Server 12c is certified with Oracle Exalogic Elastic Cloud and can run unchanged applications at this engineered system. This approach can benefit customers from Exalogic optimization's of both kernel and JVM layers to boost performance in terms of 10X for web, OLTP, JMS and grid applications. JBoss EAP 6 on the other hand has no investment on engineered systems: customers do not have the choice to deploy on a Java ultra fast system if their project becomes relevant and performance issues are detected. - WebLogic Server 12c maintains a performance gain across each new release: starting on WebLogic 5.1, the overall performance gain has been close to 4X, which close to a 20% gain release by release. JBoss on the other hand does not provide SPECJAppServer or SPECJEnterprise performance benchmarks. Their so called "performance gains" remains hidden in their customer environments, which lead us to think if it is true or not since we will never get access to those environments. - WebLogic Server 12c has industry performance benchmarks with submissions across platforms and configurations leading SPECJ. Oracle WebLogic leads SPECJAppServer performance in multiple categories, fitting all customer topologies like: dual-node, single-node, multi-node and multi-node with RAC. JBoss... again, does not provide any SPECJAppServer performance benchmarks. - WebLogic Server 12c has a feature called work manager which allows your application to embrace new performance levels based on critical resource utilization of the CPUs usage. Work managers prioritizes work and allocates threads based on an execution model that takes into account administrator-defined parameters and actual run-time performance and throughput. JBoss EAP 6 on the other hand has no compared feature and probably they never will. Not supporting such feature like work managers, JBoss EAP 6 forces admin people and specially developers to uncover performance gains in a intrusive way, rewriting the code and doing performance refactorings. 11) Professional Services Support - WebLogic Server 12c and any other technology sold by Oracle give customers the possibility of hire OCS ("Oracle Consulting Services") to manage critical scenarios, deployment assistance of new applications, high skilled consultancy of architecture, best practices and people allocation together with customer teams. All OCS services are available without any restrictions, having the customer bought software from Oracle or just starting their implementation before any acquisition. JBoss EAP 6 or Red Hat to be more specifically, only offers professional services if you buy subscriptions from them. If you are developing a new critical application for your business and need the help of Red Hat for a serious issue or architecture decision, they will probably say: "OK... I can help you but after you buy subscriptions from me". Red Hat also does not allows their professional services consultants to manage environments that uses community based software. They will probably force you to first buy a subscription, download their "enterprise" version and them, optionally hire their consultants. - Oracle provides you our university to educate your team into our technologies, including of course specialized trainings of WebLogic application server. At any time and location, you can hire Oracle to train your team so you get trustful knowledge according to your specific needs. 
Certifications for the products are also available if your technical people desire to differentiate themselves as professionals. Red Hat on the other hand have a limited pool of resources to train your team in their technologies. Basically they are selling training and certification for RHEL ("Red Hat Enterprise Linux") but if you demand more specialized training in JBoss middleware, they will probably connect you to some "certified" partner localized training since they are apparently discontinuing their education center, at least here in Brazil. They were not able to reproduce their success with RHEL education to their middleware division since they need first sell the subscriptions to after gives you specialized training. And again, they only offer you specialized training based on their enterprise version (EAP in the case of JBoss) which means that the courses will be a quite outdated. There are reports of developers that took official training's from Red Hat at this year (2012) and in a certain JBoss advanced course, Red Hat supposedly covered JBossMQ as the messaging subsystem, and even the printed material provided was based on JBossMQ since the training was created for JBoss EAP 4.3. 12) Encouraging Transparency without Ulterior Motives - WebLogic Server 12c like any other software from Oracle can be downloaded any time from anywhere, you should only possess an OTN ("Oracle Technology Network") credential and you can download any enterprise software how many times you want. And is not some kind of "trial" version. It is the official binaries that will be running for ever in your data center. Oracle does not encourages the usage of "specific versions" of our software. The binaries you buy from Oracle are the same binaries anyone in the world could download and use for testing and personal education. JBoss EAP 6 on the other hand are not available for download unless you buy a subscription and get access to the Red Hat enterprise repositories. If you need to test, learn or just start creating your application using Red Hat's middleware software, you should download it from the community website. You are not allowed to download the enterprise version that, according to Red Hat are more secure, reliable and robust. But no one of us want to start the development of a software with an unsecured, unreliable and not scalable middleware right? So what you do? You are "invited" by Red Hat to buy subscriptions from them to get access to the "cool" version of the software. - WebLogic Server 12c prices are publicly available in the Oracle website. If you want to know right now how much WebLogic will cost to your organization, just click here and get access to our price list. In the case of WebLogic, check out the "US Oracle Technology Commercial Price List". Oracle also encourages you to get in touch with a sales representative to discuss discounts that would make possible the investment into our technology. But you are not required to do this, only if you are interested in buying our technology or maybe you want to discuss some discount scenarios. JBoss EAP 6 on the other hand does not have its cost publicly available in Red Hat's website or in any other media, at least is not so easy to get such information. The only link you will possibly find in their website is a "Contact a Sales Representative" link. This is not a very good relationship between an customer and an vendor. This is not an example of transparency, mainly when the software are sold as open. 
In this situations, customers expects to see the software prices publicly available, so they can have the chance to decide, based on the existing features of the software, if the cost is fair or not. Conclusion Oracle WebLogic is the most mature, secure, reliable and scalable Java EE application server of the market, and have a proven record of success around the globe to prove it's majority. Don't lose the chance to discover today how WebLogic could fit your needs and sustain your global IT middleware strategy, no matter if your strategy are completely based on the Cloud or not.

    Read the article

  • 256 Windows Azure Worker Roles, Windows Kinect and a 90's Text-Based Ray-Tracer

    - by Alan Smith
    For a couple of years I have been demoing a simple render farm hosted in Windows Azure using worker roles and the Azure Storage service. At the start of the presentation I deploy an Azure application that uses 16 worker roles to render a 1,500 frame 3D ray-traced animation. At the end of the presentation, when the animation was complete, I would play the animation delete the Azure deployment. The standing joke with the audience was that it was that it was a “$2 demo”, as the compute charges for running the 16 instances for an hour was $1.92, factor in the bandwidth charges and it’s a couple of dollars. The point of the demo is that it highlights one of the great benefits of cloud computing, you pay for what you use, and if you need massive compute power for a short period of time using Windows Azure can work out very cost effective. The “$2 demo” was great for presenting at user groups and conferences in that it could be deployed to Azure, used to render an animation, and then removed in a one hour session. I have always had the idea of doing something a bit more impressive with the demo, and scaling it from a “$2 demo” to a “$30 demo”. The challenge was to create a visually appealing animation in high definition format and keep the demo time down to one hour.  This article will take a run through how I achieved this. Ray Tracing Ray tracing, a technique for generating high quality photorealistic images, gained popularity in the 90’s with companies like Pixar creating feature length computer animations, and also the emergence of shareware text-based ray tracers that could run on a home PC. In order to render a ray traced image, the ray of light that would pass from the view point must be tracked until it intersects with an object. At the intersection, the color, reflectiveness, transparency, and refractive index of the object are used to calculate if the ray will be reflected or refracted. Each pixel may require thousands of calculations to determine what color it will be in the rendered image. Pin-Board Toys Having very little artistic talent and a basic understanding of maths I decided to focus on an animation that could be modeled fairly easily and would look visually impressive. I’ve always liked the pin-board desktop toys that become popular in the 80’s and when I was working as a 3D animator back in the 90’s I always had the idea of creating a 3D ray-traced animation of a pin-board, but never found the energy to do it. Even if I had a go at it, the render time to produce an animation that would look respectable on a 486 would have been measured in months. PolyRay Back in 1995 I landed my first real job, after spending three years being a beach-ski-climbing-paragliding-bum, and was employed to create 3D ray-traced animations for a CD-ROM that school kids would use to learn physics. I had got into the strange and wonderful world of text-based ray tracing, and was using a shareware ray-tracer called PolyRay. PolyRay takes a text file describing a scene as input and, after a few hours processing on a 486, produced a high quality ray-traced image. The following is an example of a basic PolyRay scene file. 
background Midnight_Blue   static define matte surface { ambient 0.1 diffuse 0.7 } define matte_white texture { matte { color white } } define matte_black texture { matte { color dark_slate_gray } } define position_cylindrical 3 define lookup_sawtooth 1 define light_wood <0.6, 0.24, 0.1> define median_wood <0.3, 0.12, 0.03> define dark_wood <0.05, 0.01, 0.005>     define wooden texture { noise surface { ambient 0.2  diffuse 0.7  specular white, 0.5 microfacet Reitz 10 position_fn position_cylindrical position_scale 1  lookup_fn lookup_sawtooth octaves 1 turbulence 1 color_map( [0.0, 0.2, light_wood, light_wood] [0.2, 0.3, light_wood, median_wood] [0.3, 0.4, median_wood, light_wood] [0.4, 0.7, light_wood, light_wood] [0.7, 0.8, light_wood, median_wood] [0.8, 0.9, median_wood, light_wood] [0.9, 1.0, light_wood, dark_wood]) } } define glass texture { surface { ambient 0 diffuse 0 specular 0.2 reflection white, 0.1 transmission white, 1, 1.5 }} define shiny surface { ambient 0.1 diffuse 0.6 specular white, 0.6 microfacet Phong 7  } define steely_blue texture { shiny { color black } } define chrome texture { surface { color white ambient 0.0 diffuse 0.2 specular 0.4 microfacet Phong 10 reflection 0.8 } }   viewpoint {     from <4.000, -1.000, 1.000> at <0.000, 0.000, 0.000> up <0, 1, 0> angle 60     resolution 640, 480 aspect 1.6 image_format 0 }       light <-10, 30, 20> light <-10, 30, -20>   object { disc <0, -2, 0>, <0, 1, 0>, 30 wooden }   object { sphere <0.000, 0.000, 0.000>, 1.00 chrome } object { cylinder <0.000, 0.000, 0.000>, <0.000, 0.000, -4.000>, 0.50 chrome }   After setting up the background and defining colors and textures, the viewpoint is specified. The “camera” is located at a point in 3D space, and it looks towards another point. The angle, image resolution, and aspect ratio are specified. Two lights are present in the image at defined coordinates. The three objects in the image are a wooden disc to represent a table top, and a sphere and cylinder that intersect to form a pin that will be used for the pin board toy in the final animation. When the image is rendered, the following image is produced. The pins are modeled with a chrome surface, so they reflect the environment around them. Note that the scale of the pin shaft is not correct, this will be fixed later. Modeling the Pin Board The frame of the pin-board is made up of three boxes, and six cylinders, the front box is modeled using a clear, slightly reflective solid, with the same refractive index of glass. The other shapes are modeled as metal. object { box <-5.5, -1.5, 1>, <5.5, 5.5, 1.2> glass } object { box <-5.5, -1.5, -0.04>, <5.5, 5.5, -0.09> steely_blue } object { box <-5.5, -1.5, -0.52>, <5.5, 5.5, -0.59> steely_blue } object { cylinder <-5.2, -1.2, 1.4>, <-5.2, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <5.2, -1.2, 1.4>, <5.2, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <-5.2, 5.2, 1.4>, <-5.2, 5.2, -0.74>, 0.2 steely_blue } object { cylinder <5.2, 5.2, 1.4>, <5.2, 5.2, -0.74>, 0.2 steely_blue } object { cylinder <0, -1.2, 1.4>, <0, -1.2, -0.74>, 0.2 steely_blue } object { cylinder <0, 5.2, 1.4>, <0, 5.2, -0.74>, 0.2 steely_blue }   In order to create the matrix of pins that make up the pin board I used a basic console application with a few nested loops to create two intersecting matrixes of pins, which models the layout used in the pin boards. The resulting image is shown below. The pin board contains 11,481 pins, with the scene file containing 23,709 lines of code. 
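The console application that generates the pin matrix is not listed in the article, but since the scene file is plain text it is easy to imagine its shape. The sketch below is a guess at that generator, not the original code; the pin spacing, grid dimensions, radii and output file name are all assumptions chosen to give roughly the right number of pins.

// Hypothetical sketch of the pin-matrix generator described above: two offset
// grids of pins, each written out as a chrome sphere plus cylinder in PolyRay syntax.
using System.IO;
using System.Text;

class PinBoardGenerator
{
    static void Main()
    {
        var scene = new StringBuilder();
        const double spacing = 0.1;      // distance between pin centres (assumed)
        const int rows = 76, cols = 76;  // two 76x76 grids give roughly 11,500 pins

        for (int grid = 0; grid < 2; grid++)               // two intersecting matrices
        {
            double offset = grid * spacing / 2;            // second grid offset by half a pitch
            for (int row = 0; row < rows; row++)
            {
                for (int col = 0; col < cols; col++)
                {
                    double x = -3.8 + col * spacing + offset;
                    double y = -0.8 + row * spacing + offset;
                    double z = 0.0;                        // later driven per pin by the depth image
                    scene.AppendLine(string.Format(
                        "object {{ sphere <{0:0.000}, {1:0.000}, {2:0.000}>, 0.05 chrome }}", x, y, z));
                    scene.AppendLine(string.Format(
                        "object {{ cylinder <{0:0.000}, {1:0.000}, {2:0.000}>, <{0:0.000}, {1:0.000}, {3:0.000}>, 0.025 chrome }}",
                        x, y, z, z - 0.5));
                }
            }
        }
        File.WriteAllText("pinboard.pi", scene.ToString());
    }
}

The key point is that a PolyRay scene is just text, so emitting tens of thousands of primitives, and later per-frame variations of them, is a trivial string-building exercise.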
For the complete animation 2,000 scene files will be created, which is over 47 million lines of code. Each pin in the pin-board will slide out a specific distance when an object is pressed into the back of the board. This is easily modeled by setting the Z coordinate of the pin to a specific value. In order to set all of the pins in the pin-board to the correct position, a bitmap image can be used. The position of each pin can be set based on the color of the pixel at the corresponding position in the image. When the Windows Azure logo is used to set the Z coordinate of the pins, the following image is generated. The challenge now was to make a cool animation. The Azure logo is fine, but it is static. Using a normal video to animate the pins would not work; the colors in the video would not correspond to the depth of the objects from the camera. In order to simulate the pin board accurately, a series of frames from a depth camera could be used. Windows Kinect The Kinect controllers for the X-Box 360 and Windows feature a depth camera. The Kinect SDK for Windows provides a programming interface for Kinect, providing easy access for .NET developers to the Kinect sensors. The Kinect Explorer provided with the Kinect SDK is a great starting point for exploring Kinect from a developer's perspective. Both the X-Box 360 Kinect and the Windows Kinect will work with the Kinect SDK; the Windows Kinect is required for commercial applications, but the X-Box Kinect can be used for hobby projects. The Windows Kinect has the advantage of providing a mode that allows depth capture with objects closer to the camera, which makes for a more accurate depth image for setting the pin positions. Creating a Depth Field Animation The depth field animation used to set the positions of the pins in the pin board was created using a modified version of the Kinect Explorer sample application. In order to simulate the pin board accurately, a small section of the depth range from the depth sensor will be used. Any part of the object in front of the depth range will result in a white pixel; anything behind the depth range will be black. Within the depth range the pixels in the image will be set to RGB values from 0,0,0 to 255,255,255. A screen shot of the modified Kinect Explorer application is shown below. The Kinect Explorer sample application was modified to include slider controls that are used to set the depth range that forms the image from the depth stream. This allows the fine tuning of the depth image that is required for simulating the position of the pins in the pin board. The Kinect Explorer was also modified to record a series of images from the depth camera and save them as a sequence of JPEG files that will be used to animate the pins in the animation; the Start and Stop buttons are used to start and stop the image recording. An example of one of the depth images is shown below. Once a series of 2,000 depth images has been captured, the task of creating the animation can begin. Rendering a Test Frame In order to test the creation of frames and get an approximation of the time required to render each frame, a test frame was rendered on-premise using PolyRay. The output of the rendering process is shown below. The test frame contained 23,629 primitive shapes, most of which are the spheres and cylinders that are used for the 11,800 or so pins in the pin board. 
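As a brief aside before the test-frame statistics: the depth-windowing described above is straightforward to express in code. The sketch below is an illustration only, assuming the depth frame is available as raw millimetre values; the method and parameter names are not the actual Kinect Explorer modifications.

// Hypothetical sketch of mapping a raw depth value (in millimetres) into the
// 0-255 greyscale range that drives the pin positions. minDepth and maxDepth
// correspond to the slider-controlled depth window; the names are assumptions.
static byte DepthToIntensity(int depthMillimetres, int minDepth, int maxDepth)
{
    if (depthMillimetres <= minDepth) return 255;   // in front of the window: white
    if (depthMillimetres >= maxDepth) return 0;     // behind the window: black

    // Linear interpolation inside the window; nearer objects push the pins further out.
    double t = (double)(maxDepth - depthMillimetres) / (maxDepth - minDepth);
    return (byte)(t * 255);
}

Each pixel of the recorded JPEG frames is produced this way, which is why the greyscale images can later be read back directly as pin Z coordinates.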
The 1280x720 test frame contains 921,600 pixels, but as anti-aliasing was used the number of rays that were calculated was 4,235,777, with 3,478,754,073 object boundaries checked. The test frame of the pin board with the depth field image applied is shown below. The tracing time for the test frame was 4 minutes 27 seconds, which means rendering the 2,000 frames in the animation would take over 148 hours, or a little over 6 days. Although this is much faster than an old 486, waiting almost a week to see the results of an animation would make it challenging for animators to create, view, and refine their animations. It would be much better if the animation could be rendered in less than one hour. Windows Azure Worker Roles The cost of creating an on-premise render farm to render animations increases in proportion to the number of servers. The table below shows the cost of servers for creating a render farm, assuming a cost of $500 per server.
Number of Servers    Cost
1                    $500
16                   $8,000
256                  $128,000
As well as the cost of the servers, there would be additional costs for networking, racks, etc. Hosting an environment of 256 servers on-premise would require a server room with cooling, and some pretty hefty power cabling. The Windows Azure compute services provide worker roles, which are ideal for performing processor-intensive compute tasks. With the scalability available in Windows Azure, a job that takes 256 hours to complete could be performed using different numbers of worker roles. The time and cost of using 1, 16 or 256 worker roles is shown below.
Number of Worker Roles    Render Time    Cost
1                         256 hours      $30.72
16                        16 hours       $30.72
256                       1 hour         $30.72
Using worker roles in Windows Azure, the cost of the 256-hour job is the same irrespective of the number of worker roles used. Provided the compute task can be broken down into many small units, and the worker role compute power can be used effectively, it makes sense to scale the application so that the task is completed quickly, making the results available in a timely fashion. The task of rendering 2,000 frames in an animation is one that can easily be broken down into 2,000 individual pieces, which can be performed by a number of worker roles. Creating a Render Farm in Windows Azure The architecture of the render farm is shown in the following diagram. The render farm is a hybrid application with the following components:
·         On-Premise
o   Windows Kinect – Used in combination with the Kinect Explorer to create a stream of depth images.
o   Animation Creator – This application uses the depth images from the Kinect sensor to create scene description files for PolyRay. These files are then uploaded to the jobs blob container, and job messages added to the jobs queue.
o   Process Monitor – This application queries the role instance lifecycle table and displays statistics about the render farm environment and render process.
o   Image Downloader – This application polls the image queue and downloads the rendered animation files once they are complete.
·         Windows Azure
o   Azure Storage – Queues and blobs are used for the scene description files and completed frames. A table is used to store the statistics about the rendering environment.
The architecture of each worker role is shown below. The worker role is configured to use local storage, which provides file storage on the worker role instance that can be used by the applications to render the image and transform the format of the image. 
The service definition for the worker role with the local storage configuration highlighted is shown below. <?xml version="1.0" encoding="utf-8"?> <ServiceDefinition name="CloudRay" >   <WorkerRole name="CloudRayWorkerRole" vmsize="Small">     <Imports>     </Imports>     <ConfigurationSettings>       <Setting name="DataConnectionString" />     </ConfigurationSettings>     <LocalResources>       <LocalStorage name="RayFolder" cleanOnRoleRecycle="true" />     </LocalResources>   </WorkerRole> </ServiceDefinition>     The two executable programs, PolyRay.exe and DTA.exe are included in the Azure project, with Copy Always set as the property. PolyRay will take the scene description file and render it to a Truevision TGA file. As the TGA format has not seen much use since the mid 90’s it is converted to a JPG image using Dave's Targa Animator, another shareware application from the 90’s. Each worker roll will use the following process to render the animation frames. 1.       The worker process polls the job queue, if a job is available the scene description file is downloaded from blob storage to local storage. 2.       PolyRay.exe is started in a process with the appropriate command line arguments to render the image as a TGA file. 3.       DTA.exe is started in a process with the appropriate command line arguments convert the TGA file to a JPG file. 4.       The JPG file is uploaded from local storage to the images blob container. 5.       A message is placed on the images queue to indicate a new image is available for download. 6.       The job message is deleted from the job queue. 7.       The role instance lifecycle table is updated with statistics on the number of frames rendered by the worker role instance, and the CPU time used. The code for this is shown below. public override void Run() {     // Set environment variables     string polyRayPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), PolyRayLocation);     string dtaPath = Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), DTALocation);       LocalResource rayStorage = RoleEnvironment.GetLocalResource("RayFolder");     string localStorageRootPath = rayStorage.RootPath;       JobQueue jobQueue = new JobQueue("renderjobs");     JobQueue downloadQueue = new JobQueue("renderimagedownloadjobs");     CloudRayBlob sceneBlob = new CloudRayBlob("scenes");     CloudRayBlob imageBlob = new CloudRayBlob("images");     RoleLifecycleDataSource roleLifecycleDataSource = new RoleLifecycleDataSource();       Frames = 0;       while (true)     {         // Get the render job from the queue         CloudQueueMessage jobMsg = jobQueue.Get();           if (jobMsg != null)         {             // Get the file details             string sceneFile = jobMsg.AsString;             string tgaFile = sceneFile.Replace(".pi", ".tga");             string jpgFile = sceneFile.Replace(".pi", ".jpg");               string sceneFilePath = Path.Combine(localStorageRootPath, sceneFile);             string tgaFilePath = Path.Combine(localStorageRootPath, tgaFile);             string jpgFilePath = Path.Combine(localStorageRootPath, jpgFile);               // Copy the scene file to local storage             sceneBlob.DownloadFile(sceneFilePath);               // Run the ray tracer.             
string polyrayArguments =                 string.Format("\"{0}\" -o \"{1}\" -a 2", sceneFilePath, tgaFilePath);             Process polyRayProcess = new Process();             polyRayProcess.StartInfo.FileName =                 Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), polyRayPath);             polyRayProcess.StartInfo.Arguments = polyrayArguments;             polyRayProcess.Start();             polyRayProcess.WaitForExit();               // Convert the image             string dtaArguments =                 string.Format(" {0} /FJ /P{1}", tgaFilePath, Path.GetDirectoryName (jpgFilePath));             Process dtaProcess = new Process();             dtaProcess.StartInfo.FileName =                 Path.Combine(Environment.GetEnvironmentVariable("RoleRoot"), dtaPath);             dtaProcess.StartInfo.Arguments = dtaArguments;             dtaProcess.Start();             dtaProcess.WaitForExit();               // Upload the image to blob storage             imageBlob.UploadFile(jpgFilePath);               // Add a download job.             downloadQueue.Add(jpgFile);               // Delete the render job message             jobQueue.Delete(jobMsg);               Frames++;         }         else         {             Thread.Sleep(1000);         }           // Log the worker role activity.         roleLifecycleDataSource.Alive             ("CloudRayWorker", RoleLifecycleDataSource.RoleLifecycleId, Frames);     } }     Monitoring Worker Role Instance Lifecycle In order to get more accurate statistics about the lifecycle of the worker role instances used to render the animation data was tracked in an Azure storage table. The following class was used to track the worker role lifecycles in Azure storage.   public class RoleLifecycle : TableServiceEntity {     public string ServerName { get; set; }     public string Status { get; set; }     public DateTime StartTime { get; set; }     public DateTime EndTime { get; set; }     public long SecondsRunning { get; set; }     public DateTime LastActiveTime { get; set; }     public int Frames { get; set; }     public string Comment { get; set; }       public RoleLifecycle()     {     }       public RoleLifecycle(string roleName)     {         PartitionKey = roleName;         RowKey = Utils.GetAscendingRowKey();         Status = "Started";         StartTime = DateTime.UtcNow;         LastActiveTime = StartTime;         EndTime = StartTime;         SecondsRunning = 0;         Frames = 0;     } }     A new instance of this class is created and added to the storage table when the role starts. It is then updated each time the worker renders a frame to record the total number of frames rendered and the total processing time. These statistics are used be the monitoring application to determine the effectiveness of use of resources in the render farm. Rendering the Animation The Azure solution was deployed to Windows Azure with the service configuration set to 16 worker role instances. This allows for the application to be tested in the cloud environment, and the performance of the application determined. When I demo the application at conferences and user groups I often start with 16 instances, and then scale up the application to the full 256 instances. The configuration to run 16 instances is shown below. 
<?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">   <Role name="CloudRayWorkerRole">     <Instances count="16" />     <ConfigurationSettings>       <Setting name="DataConnectionString"         value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />     </ConfigurationSettings>   </Role> </ServiceConfiguration>     About six minutes after deploying the application the first worker roles become active and start to render the first frames of the animation. The CloudRay Monitor application displays an icon for each worker role instance, with a number indicating the number of frames that the worker role has rendered. The statistics on the left show the number of active worker roles and statistics about the render process. The render time is the time since the first worker role became active; the CPU time is the total amount of processing time used by all worker role instances to render the frames.   Five minutes after the first worker role became active the last of the 16 worker roles activated. By this time the first seven worker roles had each rendered one frame of the animation.   With 16 worker roles u and running it can be seen that one hour and 45 minutes CPU time has been used to render 32 frames with a render time of just under 10 minutes.     At this rate it would take over 10 hours to render the 2,000 frames of the full animation. In order to complete the animation in under an hour more processing power will be required. Scaling the render farm from 16 instances to 256 instances is easy using the new management portal. The slider is set to 256 instances, and the configuration saved. We do not need to re-deploy the application, and the 16 instances that are up and running will not be affected. Alternatively, the configuration file for the Azure service could be modified to specify 256 instances.   <?xml version="1.0" encoding="utf-8"?> <ServiceConfiguration serviceName="CloudRay" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="1" osVersion="*">   <Role name="CloudRayWorkerRole">     <Instances count="256" />     <ConfigurationSettings>       <Setting name="DataConnectionString"         value="DefaultEndpointsProtocol=https;AccountName=cloudraydata;AccountKey=..." />     </ConfigurationSettings>   </Role> </ServiceConfiguration>     Six minutes after the new configuration has been applied 75 new worker roles have activated and are processing their first frames.   Five minutes later the full configuration of 256 worker roles is up and running. We can see that the average rate of frame rendering has increased from 3 to 12 frames per minute, and that over 17 hours of CPU time has been utilized in 23 minutes. In this test the time to provision 140 worker roles was about 11 minutes, which works out at about one every five seconds.   We are now half way through the rendering, with 1,000 frames complete. This has utilized just under three days of CPU time in a little over 35 minutes.   The animation is now complete, with 2,000 frames rendered in a little over 52 minutes. The CPU time used by the 256 worker roles is 6 days, 7 hours and 22 minutes with an average frame rate of 38 frames per minute. The rendering of the last 1,000 frames took 16 minutes 27 seconds, which works out at a rendering rate of 60 frames per minute. 
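The queue is what makes this near-linear scaling possible: every frame is an independent message, so any number of worker roles can drain the same queue without coordination. The JobQueue class used in the worker role code above is the author's own and is not listed in the article; the sketch below is only a guess at its shape, assuming the Microsoft.WindowsAzure.StorageClient library that shipped with the Azure SDK of that era.

// Hypothetical sketch of the JobQueue wrapper used by the worker role code.
// It assumes the old Microsoft.WindowsAzure.StorageClient API; the real class
// may well look different.
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.ServiceRuntime;
using Microsoft.WindowsAzure.StorageClient;

public class JobQueue
{
    private readonly CloudQueue queue;

    public JobQueue(string queueName)
    {
        // DataConnectionString is the setting defined in the service configuration above.
        CloudStorageAccount account = CloudStorageAccount.Parse(
            RoleEnvironment.GetConfigurationSettingValue("DataConnectionString"));
        queue = account.CreateCloudQueueClient().GetQueueReference(queueName);
        queue.CreateIfNotExist();
    }

    // Add a job message, for example the name of a scene file to render.
    public void Add(string job)
    {
        queue.AddMessage(new CloudQueueMessage(job));
    }

    // Get the next job message, or null if the queue is currently empty.
    public CloudQueueMessage Get()
    {
        return queue.GetMessage();
    }

    public void Delete(CloudQueueMessage message)
    {
        queue.DeleteMessage(message);
    }
}

With a wrapper like this in place, load balancing falls out for free: a fast instance simply dequeues more messages than a slow one.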
The frame counts in the server instances indicate that the use of a queue to distribute the workload has been very effective in spreading the load across the 256 worker role instances. The first 16 instances that were deployed have rendered between 11 and 13 frames each, whilst the 240 instances that were added when the application was scaled have rendered between 6 and 9 frames each. Completed Animation I’ve uploaded the completed animation to YouTube; a low resolution preview is shown below. Pin Board Animation Created using Windows Kinect and 256 Windows Azure Worker Roles   The animation can be viewed in 1280x720 resolution at the following link: http://www.youtube.com/watch?v=n5jy6bvSxWc Effective Use of Resources According to the CloudRay monitor statistics the animation took 6 days, 7 hours and 22 minutes of CPU time to render, which works out at 152 hours of compute time, rounded up to the nearest hour. As worker role instances are billed for the full hour, it may have been possible to render the animation using fewer than 256 worker roles. When deciding the optimal usage of resources, the time required to provision and start the worker roles must also be considered. In the demo I started with 16 worker roles, and then scaled the application to 256 worker roles. It would have been more optimal to start the application with maybe 200 worker roles and utilize the full hour that I was being billed for. This would, however, have prevented showing the ease of scalability of the application. The new management portal displays the CPU usage across the worker roles in the deployment. The average CPU usage across all instances is 93.27%, with over 99% used when all the instances are up and running. This shows that the worker role resources are being used very effectively. Grid Computing Scenarios Although I am using this scenario for a hobby project, there are many scenarios where a large amount of compute power is required for a short period of time. Windows Azure provides a great platform for developing these types of grid computing applications, and can work out very cost effective.
·         Windows Azure can provide massive compute power, on demand, in a matter of minutes.
·         The use of queues to manage the load balancing of jobs between role instances is a simple and effective solution.
·         Using a cloud-computing platform like Windows Azure allows proof-of-concept scenarios to be tested and evaluated on a very low budget.
·         The absence of charges for inbound data transfer makes the uploading of large data sets to Windows Azure Storage services cost effective. (Transaction charges still apply.)
Tips for using Windows Azure for Grid Computing Scenarios I found a render farm a fairly simple scenario to implement on Windows Azure. I was impressed by the ease of scalability that Azure provides, and by the short time that the application took to scale from 16 to 256 worker role instances. In this case it was around 13 minutes; in other tests it took between 10 and 20 minutes. The following tips may be useful when implementing a grid computing project in Windows Azure.
·         Using an Azure Storage queue to load-balance the units of work across multiple worker roles is simple and very effective. The design I have used in this scenario could easily scale to many thousands of worker role instances.
·         Windows Azure accounts are typically limited to 20 cores. If you need to use more than this, a call to support and a credit card check will be required.
·         Be aware of how the billing model works. You will be charged for worker role instances for the full clock hour in which the instance is deployed. Schedule the workload to start just after the clock hour has started.
·         Monitor the utilization of the resources you are provisioning, and ensure that you are not paying for worker roles that are idle.
·         If you are deploying third party applications to worker roles, you may well run into licensing issues. Purchasing software licenses on a per-processor basis when using hundreds of processors for a short time period would not be cost effective.
·         Third party software may also require installation onto the worker roles, which can be accomplished using start-up tasks (a minimal example follows after this list). Bear in mind that adding a start-up task and a possible reboot will add to the time required for the worker role instance to start and activate. An alternative may be to use a prepared VM and use VM roles.
·         Consider using the Windows Azure Autoscaling Application Block (WASABi) to autoscale the worker roles in your application. When using a large number of worker roles, the utilization must be carefully monitored; if the scaling algorithms are not optimal it could get very expensive!
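To illustrate the start-up task approach mentioned in the list above, a task can be declared inside the WorkerRole element of the service definition shown earlier. This is a sketch only; install.cmd is a placeholder name, and the actual script would depend on the third-party software being installed.

<Startup>
  <!-- install.cmd is a placeholder for whatever silent-install script the software requires -->
  <Task commandLine="install.cmd" executionContext="elevated" taskType="simple" />
</Startup>

Because a simple task runs to completion before the role starts, a slow installer directly delays the point at which the instance can begin pulling messages from the job queue, which is why weighing this against a prepared VM role is worthwhile.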

    Read the article

  • MySQL is hogging my server resources

    - by Reacen
    Does anyone have any idea of what can cause this weird behaviour and how I go about fixing it? This is all coming from MySQL only (both RAM and CPU usage), for about 10 minutes after I reboot my Java game server (that has a pool of 256 connections). There are not that many queries and I think it may be more of a MySQL misconfiguration problem. My server: 3.20 GHz * 6 core / 24 GB RAM / 64 bit Windows Server 2003. My game server: Java server, with 256 MySQL connections pool (MyISAM engine), about 500,000 accounts, and 9 million rows of game items in database and about 3,000 players are connected. After about 15 minutes of the game server reboot, the server resumes its stability and CPU usage drop down to 1% ~ 5% and memory to 6 GB. Here is a copy of my MySQL configuration. Also, any advice about my MySQL configuration will be appreciated. I really set it up almost at random. # Example MySQL config file for very large systems. # # This is for a large system with memory of 1G-2G where the system runs mainly # MySQL. # # You can copy this file to # /etc/my.cnf to set global options, # mysql-data-dir/my.cnf to set server-specific options (in this # installation this directory is C:\mysql\data) or # ~/.my.cnf to set user-specific options. # # In this file, you can use all long options that a program supports. # If you want to know which options a program supports, run the program # with the "--help" option. # The following options will be passed to all MySQL clients [client] #password = your_password port = 3306 socket = /tmp/mysql.sock # Here follows entries for some specific programs # The MySQL server [mysqld] #log=c:\mysql.log port = 3306 socket = /tmp/mysql.sock skip-locking key_buffer_size = 2572M max_allowed_packet = 64M table_open_cache = 512 sort_buffer_size = 128M read_buffer_size = 128M read_rnd_buffer_size = 128M myisam_sort_buffer_size = 500M thread_cache_size = 32 query_cache_size = 1948M # Try number of CPU's*2 for thread_concurrency thread_concurrency = 12 max_connections = 5000 # Don't listen on a TCP/IP port at all. This can be a security enhancement, # if all processes that need to connect to mysqld run on the same host. # All interaction with mysqld must be made via Unix sockets or named pipes. # Note that using this option without enabling named pipes on Windows # (via the "enable-named-pipe" option) will render mysqld useless! # #skip-networking # Replication Master Server (default) # binary logging is required for replication log-bin=mysql-bin # required unique id between 1 and 2^32 - 1 # defaults to 1 if master-host is not set # but will not function as a master if omitted server-id = 1 # Replication Slave (comment out master section to use this) # # To configure this host as a replication slave, you can choose between # two methods : # # 1) Use the CHANGE MASTER TO command (fully described in our manual) - # the syntax is: # # CHANGE MASTER TO MASTER_HOST=<host>, MASTER_PORT=<port>, # MASTER_USER=<user>, MASTER_PASSWORD=<password> ; # # where you replace <host>, <user>, <password> by quoted strings and # <port> by the master's port number (3306 by default). # # Example: # # CHANGE MASTER TO MASTER_HOST='125.564.12.1', MASTER_PORT=3306, # MASTER_USER='joe', MASTER_PASSWORD='secret'; # # OR # # 2) Set the variables below. 
However, in case you choose this method, then # start replication for the first time (even unsuccessfully, for example # if you mistyped the password in master-password and the slave fails to # connect), the slave will create a master.info file, and any later # change in this file to the variables' values below will be ignored and # overridden by the content of the master.info file, unless you shutdown # the slave server, delete master.info and restart the slaver server. # For that reason, you may want to leave the lines below untouched # (commented) and instead use CHANGE MASTER TO (see above) # # required unique id between 2 and 2^32 - 1 # (and different from the master) # defaults to 2 if master-host is set # but will not function as a slave if omitted #server-id = 2 # # The replication master for this slave - required #master-host = <hostname> # # The username the slave will use for authentication when connecting # to the master - required #master-user = <username> # # The password the slave will authenticate with when connecting to # the master - required #master-password = <password> # # The port the master is listening on. # optional - defaults to 3306 #master-port = <port> # # binary logging - not required for slaves, but recommended #log-bin=mysql-bin # # binary logging format - mixed recommended #binlog_format=mixed # Point the following paths to different dedicated disks #tmpdir = /tmp/ #log-update = /path-to-dedicated-directory/hostname # Uncomment the following if you are using InnoDB tables #innodb_data_home_dir = C:\mysql\data/ #innodb_data_file_path = ibdata1:2000M;ibdata2:10M:autoextend #innodb_log_group_home_dir = C:\mysql\data/ # You can set .._buffer_pool_size up to 50 - 80 % # of RAM but beware of setting memory usage too high #innodb_buffer_pool_size = 384M #innodb_additional_mem_pool_size = 20M # Set .._log_file_size to 25 % of buffer pool size #innodb_log_file_size = 100M #innodb_log_buffer_size = 8M #innodb_flush_log_at_trx_commit = 1 #innodb_lock_wait_timeout = 50 [mysqldump] quick max_allowed_packet = 64M [mysql] no-auto-rehash # Remove the next comment character if you are not familiar with SQL #safe-updates [myisamchk] key_buffer_size = 256M sort_buffer_size = 256M read_buffer = 8M write_buffer = 8M [mysqlhotcopy] interactive-timeout

    Read the article

  • Does boost::asio makes excessive small heap allocations or am I wrong?

    - by Poni
    #include <cstdlib> #include <iostream> #include <boost/bind.hpp> #include <boost/asio.hpp> using boost::asio::ip::tcp; class session { public: session(boost::asio::io_service& io_service) : socket_(io_service) { } tcp::socket& socket() { return socket_; } void start() { socket_.async_read_some(boost::asio::buffer(data_, max_length - 1), boost::bind(&session::handle_read, this, boost::asio::placeholders::error, boost::asio::placeholders::bytes_transferred)); } void handle_read(const boost::system::error_code& error, size_t bytes_transferred) { if (!error) { data_[bytes_transferred] = '\0'; if(NULL != strstr(data_, "quit")) { this->socket().shutdown(boost::asio::ip::tcp::socket::shutdown_both); this->socket().close(); // how to make this dispatch "handle_read()" with a "disconnected" flag? } else { boost::asio::async_write(socket_, boost::asio::buffer(data_, bytes_transferred), boost::bind(&session::handle_write, this, boost::asio::placeholders::error)); socket_.async_read_some(boost::asio::buffer(data_, max_length - 1), boost::bind(&session::handle_read, this, boost::asio::placeholders::error, boost::asio::placeholders::bytes_transferred)); } } else { delete this; } } void handle_write(const boost::system::error_code& error) { if (!error) { // } else { delete this; } } private: tcp::socket socket_; enum { max_length = 1024 }; char data_[max_length]; }; class server { public: server(boost::asio::io_service& io_service, short port) : io_service_(io_service), acceptor_(io_service, tcp::endpoint(tcp::v4(), port)) { session* new_session = new session(io_service_); acceptor_.async_accept(new_session->socket(), boost::bind(&server::handle_accept, this, new_session, boost::asio::placeholders::error)); } void handle_accept(session* new_session, const boost::system::error_code& error) { if (!error) { new_session->start(); new_session = new session(io_service_); acceptor_.async_accept(new_session->socket(), boost::bind(&server::handle_accept, this, new_session, boost::asio::placeholders::error)); } else { delete new_session; } } private: boost::asio::io_service& io_service_; tcp::acceptor acceptor_; }; int main(int argc, char* argv[]) { try { if (argc != 2) { std::cerr << "Usage: async_tcp_echo_server <port>\n"; return 1; } boost::asio::io_service io_service; using namespace std; // For atoi. server s(io_service, atoi(argv[1])); io_service.run(); } catch (std::exception& e) { std::cerr << "Exception: " << e.what() << "\n"; } return 0; } While experimenting with boost::asio I've noticed that within the calls to async_write()/async_read_some() there is a usage of the C++ "new" keyword. Also, when stressing this echo server with a client (1 connection) that sends for example 100,000 times some data, the memory usage of this program is getting higher and higher. What's going on? Will it allocate memory for every call? Or am I wrong? Asking because it doesn't seem right that a server app will allocate, anything. Can I handle it, say with a memory pool? Another side-question: See the "this-socket().close();" ? I want it, as the comment right to it says, to dispatch that same function one last time, with a disconnection error. Need that to do some clean-up. How do I do that? Thank you all gurus (:

    Read the article

  • How to correctly use DERIVE or COUNTER in munin plugins

    - by Johan
    I'm using munin to monitor my server. I've been able to write plugins for it, but only if the graph type is GAUGE. When I try COUNTER or DERIVE, no data is logged or graphed. The plugin i'm currently stuck on is for monitoring bandwidth usage, and is as follows: /etc/munin/plugins/bandwidth2 #!/bin/sh if [ "$1" = "config" ]; then echo 'graph_title Bandwidth Usage 2' echo 'graph_vlabel Bandwidth' echo 'graph_scale no' echo 'graph_category network' echo 'graph_info Bandwidth usage.' echo 'used.label Used' echo 'used.info Bandwidth used so far this month.' echo 'used.type DERIVE' echo 'used.min 0' echo 'remain.label Remaining' echo 'remain.info Bandwidth remaining this month.' echo 'remain.type DERIVE' echo 'remain.min 0' exit 0 fi cat /var/log/zen.log The contents of /var/log/zen.log are: used.value 61.3251953125 remain.value 20.0146484375 And the resulting database is: <!-- Round Robin Database Dump --><rrd> <version> 0003 </version> <step> 300 </step> <!-- Seconds --> <lastupdate> 1269936605 </lastupdate> <!-- 2010-03-30 09:10:05 BST --> <ds> <name> 42 </name> <type> DERIVE </type> <minimal_heartbeat> 600 </minimal_heartbeat> <min> 0.0000000000e+00 </min> <max> NaN </max> <!-- PDP Status --> <last_ds> 61.3251953125 </last_ds> <value> NaN </value> <unknown_sec> 5 </unknown_sec> </ds> <!-- Round Robin Archives --> <rra> <cf> AVERAGE </cf> <pdp_per_row> 1 </pdp_per_row> <!-- 300 seconds --> <params> <xff> 5.0000000000e-01 </xff> </params> <cdp_prep> <ds> <primary_value> NaN </primary_value> <secondary_value> NaN </secondary_value> <value> NaN </value> <unknown_datapoints> 0 </unknown_datapoints> </ds> </cdp_prep> <database> <!-- 2010-03-28 09:15:00 BST / 1269764100 --> <row><v> NaN </v></row> <!-- 2010-03-28 09:20:00 BST / 1269764400 --> <row><v> NaN </v></row> <!-- 2010-03-28 09:25:00 BST / 1269764700 --> <row><v> NaN </v></row> <snip> The value for last_ds is correct, it just doesn't seem to make it into the actual database. If I change DERIVE to GAUGE, it works as expected. munin-run bandwidth2 outputs the contents of /var/log/zen.log I've been all over the (sparse) docs for munin plugins, and can't find my mistake. Modifying an existing plugin didn't work for me either.

    Read the article

  • Optimizing MySQL for small VPS

    - by Chris M
    I'm trying to optimize my MySQL config for a verrry small VPS. The VPS is also running NGINX/PHP-FPM and Magento; all with a limit of 250MB of RAM. This is an output of MySQL Tuner... -------- General Statistics -------------------------------------------------- [--] Skipped version check for MySQLTuner script [OK] Currently running supported MySQL version 5.1.41-3ubuntu12.8 [OK] Operating on 64-bit architecture -------- Storage Engine Statistics ------------------------------------------- [--] Status: -Archive -BDB -Federated +InnoDB -ISAM -NDBCluster [--] Data in MyISAM tables: 1M (Tables: 14) [--] Data in InnoDB tables: 29M (Tables: 301) [--] Data in MEMORY tables: 1M (Tables: 17) [!!] Total fragmented tables: 301 -------- Security Recommendations ------------------------------------------- [OK] All database users have passwords assigned -------- Performance Metrics ------------------------------------------------- [--] Up for: 2d 11h 14m 58s (1M q [8.038 qps], 33K conn, TX: 2B, RX: 618M) [--] Reads / Writes: 83% / 17% [--] Total buffers: 122.0M global + 8.6M per thread (100 max threads) [!!] Maximum possible memory usage: 978.2M (404% of installed RAM) [OK] Slow queries: 0% (37/1M) [OK] Highest usage of available connections: 6% (6/100) [OK] Key buffer size / total MyISAM indexes: 32.0M/282.0K [OK] Key buffer hit rate: 99.7% (358K cached / 1K reads) [OK] Query cache efficiency: 83.4% (1M cached / 1M selects) [!!] Query cache prunes per day: 48301 [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 144K sorts) [OK] Temporary tables created on disk: 13% (27K on disk / 203K total) [OK] Thread cache hit rate: 99% (6 created / 33K connections) [!!] Table cache hit rate: 0% (32 open / 51K opened) [OK] Open file limit used: 1% (20/1K) [OK] Table locks acquired immediately: 99% (1M immediate / 1M locks) [!!] InnoDB data size / buffer pool: 29.2M/8.0M -------- Recommendations ----------------------------------------------------- General recommendations: Run OPTIMIZE TABLE to defragment tables for better performance Reduce your overall MySQL memory footprint for system stability Enable the slow query log to troubleshoot bad queries Increase table_cache gradually to avoid file descriptor limits Variables to adjust: *** MySQL's maximum memory usage is dangerously high *** *** Add RAM before increasing MySQL buffer variables *** query_cache_size (> 64M) table_cache (> 32) innodb_buffer_pool_size (>= 29M) and this is the config. # # The MySQL database server configuration file. # # You can copy this to one of: # - "/etc/mysql/my.cnf" to set global options, # - "~/.my.cnf" to set user-specific options. # # One can use all long options that the program supports. # Run program with --help to get a list of available options and with # --print-defaults to see which it would actually understand and use. # # For explanations see # http://dev.mysql.com/doc/mysql/en/server-system-variables.html # This will be passed to all mysql clients # It has been reported that passwords should be enclosed with ticks/quotes # escpecially if they contain "#" chars... # Remember to edit /etc/mysql/debian.cnf when changing the socket location. [client] port = 3306 socket = /var/run/mysqld/mysqld.sock # Here is entries for some specific programs # The following values assume you have at least 32M ram # This was formally known as [safe_mysqld]. Both versions are currently parsed. 
[mysqld_safe] socket = /var/run/mysqld/mysqld.sock nice = 0 [mysqld] # # * Basic Settings # # # * IMPORTANT # If you make changes to these settings and your system uses apparmor, you may # also need to also adjust /etc/apparmor.d/usr.sbin.mysqld. # user = mysql socket = /var/run/mysqld/mysqld.sock port = 3306 basedir = /usr datadir = /var/lib/mysql tmpdir = /tmp skip-external-locking # # Instead of skip-networking the default is now to listen only on # localhost which is more compatible and is not less secure. bind-address = 127.0.0.1 # # * Fine Tuning # key_buffer = 32M max_allowed_packet = 16M thread_stack = 192K thread_cache_size = 8 sort_buffer_size = 4M read_buffer_size = 4M myisam_sort_buffer_size = 16M # This replaces the startup script and checks MyISAM tables if needed # the first time they are touched myisam-recover = BACKUP max_connections = 100 table_cache = 32 tmp_table_size = 128M #thread_concurrency = 10 # # * Query Cache Configuration # #query_cache_limit = 1M query_cache_type = 1 query_cache_size = 64M # # * Logging and Replication # # Both location gets rotated by the cronjob. # Be aware that this log type is a performance killer. # As of 5.1 you can enable the log at runtime! #general_log_file = /var/log/mysql/mysql.log #general_log = 1 log_error = /var/log/mysql/error.log # Here you can see queries with especially long duration #log_slow_queries = /var/log/mysql/mysql-slow.log #long_query_time = 2 #log-queries-not-using-indexes # # The following can be used as easy to replay backup logs or for replication. # note: if you are setting up a replication slave, see README.Debian about # other settings you may need to change. #server-id = 1 #log_bin = /var/log/mysql/mysql-bin.log expire_logs_days = 10 max_binlog_size = 100M #binlog_do_db = include_database_name #binlog_ignore_db = include_database_name # # * InnoDB # # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/. # Read the manual for more InnoDB related options. There are many! # # * Security Features # # Read the manual, too, if you want chroot! # chroot = /var/lib/mysql/ # # For generating SSL certificates I recommend the OpenSSL GUI "tinyca". # # ssl-ca=/etc/mysql/cacert.pem # ssl-cert=/etc/mysql/server-cert.pem # ssl-key=/etc/mysql/server-key.pem [mysqldump] quick quote-names max_allowed_packet = 16M [mysql] #no-auto-rehash # faster start of mysql but no tab completition [isamchk] key_buffer = 16M # # * IMPORTANT: Additional settings that can override those from this file! # The files must end with '.cnf', otherwise they'll be ignored. # !includedir /etc/mysql/conf.d/ The site contains 1 wordpress site,so lots of MYISAM but mostly static content as its not changing all that often (A wordpress cache plugin deals with this). And the Magento Site which consists of a lot of InnoDB tables, some MyISAM and some INMEMORY. The "read" side seems to be running pretty well with a mass of optimizations I've used on Magento, the NGINX setup and PHP-FPM + XCACHE. I'd love to have a kick in the right direction with the MySQL config so I'm not blindly altering it based on the MySQLTuner without understanding what I'm changing. Thanks

    Read the article

  • Qt Linker Errors

    - by Kyle Rozendo
    Hi All, I've been trying to get Qt working (QCreator, QIde and now VS2008). I have sorted out a ton of issues already, but I am now faced with the following build errors, and frankly I'm out of ideas. Error 1 error LNK2019: unresolved external symbol "public: void __thiscall FileVisitor::processFileList(class QStringList)" (?processFileList@FileVisitor@@QAEXVQStringList@@@Z) referenced in function _main codevisitor-test.obj Question1 Error 2 error LNK2019: unresolved external symbol "public: void __thiscall FileVisitor::processEntry(class QString)" (?processEntry@FileVisitor@@QAEXVQString@@@Z) referenced in function _main codevisitor-test.obj Question1 Error 3 error LNK2019: unresolved external symbol "public: class QString __thiscall ArgumentList::getSwitchArg(class QString,class QString)" (?getSwitchArg@ArgumentList@@QAE?AVQString@@V2@0@Z) referenced in function _main codevisitor-test.obj Question1 Error 4 error LNK2019: unresolved external symbol "public: bool __thiscall ArgumentList::getSwitch(class QString)" (?getSwitch@ArgumentList@@QAE_NVQString@@@Z) referenced in function _main codevisitor-test.obj Question1 Error 5 error LNK2019: unresolved external symbol "public: void __thiscall ArgumentList::argsToStringlist(int,char * * const)" (?argsToStringlist@ArgumentList@@QAEXHQAPAD@Z) referenced in function "public: __thiscall ArgumentList::ArgumentList(int,char * * const)" (??0ArgumentList@@QAE@HQAPAD@Z) codevisitor-test.obj Question1 Error 6 error LNK2019: unresolved external symbol "public: __thiscall FileVisitor::FileVisitor(class QString,bool,bool)" (??0FileVisitor@@QAE@VQString@@_N1@Z) referenced in function "public: __thiscall CodeVisitor::CodeVisitor(class QString,bool)" (??0CodeVisitor@@QAE@VQString@@_N@Z) codevisitor-test.obj Question1 Error 7 error LNK2001: unresolved external symbol "public: virtual struct QMetaObject const * __thiscall FileVisitor::metaObject(void)const " (?metaObject@FileVisitor@@UBEPBUQMetaObject@@XZ) codevisitor-test.obj Question1 Error 8 error LNK2001: unresolved external symbol "public: virtual void * __thiscall FileVisitor::qt_metacast(char const *)" (?qt_metacast@FileVisitor@@UAEPAXPBD@Z) codevisitor-test.obj Question1 Error 9 error LNK2001: unresolved external symbol "public: virtual int __thiscall FileVisitor::qt_metacall(enum QMetaObject::Call,int,void * *)" (?qt_metacall@FileVisitor@@UAEHW4Call@QMetaObject@@HPAPAX@Z) codevisitor-test.obj Question1 Error 10 error LNK2001: unresolved external symbol "protected: virtual bool __thiscall FileVisitor::skipDir(class QDir const &)" (?skipDir@FileVisitor@@MAE_NABVQDir@@@Z) codevisitor-test.obj Question1 Error 11 fatal error LNK1120: 10 unresolved externals ... \Visual Studio 2008\Projects\Assignment1\Question1\Question1\Debug\Question1.exe Question1 The code is as follows: #include "argumentlist.h" #include <codevisitor.h> #include <QDebug> void usage(QString appname) { qDebug() << appname << " Usage: \n" << "codevisitor [-r] [-d startdir] [-f filter] [file-list]\n" << "\t-r \tvisitor will recurse into subdirs\n" << "\t-d startdir\tspecifies starting directory\n" << "\t-f filter\tfilename filter to restrict visits\n" << "\toptional list of files to be visited"; } int main(int argc, char** argv) { ArgumentList al(argc, argv); QString appname = al.takeFirst(); /* app name is always first in the list. 
*/ if (al.count() == 0) { usage(appname); exit(1); } bool recursive(al.getSwitch("-r")); QString startdir(al.getSwitchArg("-d")); QString filter(al.getSwitchArg("-f")); CodeVisitor cvis(filter, recursive); if (startdir != QString()) { cvis.processEntry(startdir); } else if (al.size()) { cvis.processFileList(al); } else return 1; qDebug() << "Files Processed: %d" << cvis.getNumFiles(); qDebug() << cvis.getResultString(); return 0; } Thanks in advance, I'm simply stumped.

    Read the article

  • Can someone explain the "use-cases" for the default munin graphs?

    - by exhuma
    When installing munin, it activates a default set of plugins (at least on Ubuntu). Alternatively, you can simply run munin-node-configure to figure out which plugins are supported on your system. Most of these plugins plot straight-forward data. My question is not so much to explain the nature of the data (well... maybe for some) but what is it that you look for in these graphs? It is easy to install munin and see fancy graphs. But having the graphs and not being able to "read" them renders them totally useless. I am going to list the standard plugins which are enabled by default on my system, so it's going to be a long list. For completeness, I am also going to list plugins which I think I understand and give a short explanation as to what I think each is used for. Please correct me if I am wrong about any of them. So let me split this question into three parts: plugins where I don't even understand the data; plugins where I understand the data but don't know what I should look out for; and plugins which I think I understand.
Plugins where I don't even understand the data These may contain questions that are not necessarily aimed at munin alone. Not understanding the data usually means a gap in fundamental knowledge of operating systems/hardware.... ;) Feel free to respond with a "giyf" answer. These are plugins where I can only guess what's going on... I hardly want to look at these "guessing"...
- Disk IOs per device (IOs/second): What's an IO? I know it stands for input/output, but that's as far as it goes.
- Disk latency per device (Average IO wait): Not a clue what an "IO wait" is...
- IO Service Time: This one is a huge mess, and it's near impossible to see anything in the graph at all.
Plugins where I understand the data but don't know what I should look out for
- IOStat (blocks/second read/written): I assume the thing to look out for here is spikes, which would mean that the device is in heavy use?
- Available entropy (bytes): I assume that this is important for random number generation? Why would I graph this? So far the value has always been near constant.
- VMStat (running/I/O sleep processes): What's the difference between this one and the "processes" graph? Both show running/sleeping processes, whereas the "Processes" graph seems to have more details.
- Disk throughput per device (bytes/second read/written): What's the difference between this one and the "IOStat" graph?
- inode table usage: What should I look for in this graph?
Plugins which I think I understand I'll be guessing some things here... correct me if I am wrong.
- Disk usage in percent (percent): How much disk space is used/remaining. As this approaches 100%, you should consider cleaning up or extending the partition. This is extremely important for the root partition.
- Firewall Throughput (packets/second): The number of packets passing through the firewall. If this is spiking for a longer period of time, it could be a sign of a DOS attack (or we are simply receiving a large file). It can also give you an idea about your firewall performance. If it's levelling out and you need more "power" you should consider load balancing. If it's levelling out and you see a correlation with your CPU load, it could also mean that your hardware is not fast enough. Correlations with disk usage could point to excessive LOG targets in your FW config.
- eth0 errors (packets in/out): Network errors. If this value is increasing, it could be a sign of faulty hardware.
- eth0 traffic (bits/second in/out): Raw network traffic. This should correlate with Firewall throughput.
- number of threads: An ever-increasing value might point to a process not properly closing threads. Investigate!
- processes: Breakdown of active processes (including sleeping). A quick spike in here might point to a fork-bomb. A slow but ever-increasing value might point to an application spawning sub-processes but not properly closing them. Investigate using ps faux.
- process priority: This shows the distribution of process priorities. Having only high-priority processes is not of much use. Consider de-prioritizing some.
- cpu usage: Fairly straight-forward. If this is spiking, you may have an attack going on, or a process is hogging the CPU. If it's slowly increasing and approaching max in normal operations, you should consider upgrading your hardware (or load-balancing).
- file table usage: Number of actively open files. If this is reaching max, you may have a process opening, but not properly releasing, files.
- load average: Shows a summarized value for the system load. Should correlate with CPU usage. Increasing values can come from a number of sources. Look for correlations with other graphs.
- memory usage: A graphical representation of your memory. As long as you have a lot of unused+cache+buffers you are fine.
- swap in/out: Shows the activity on your swap partition. This should always be 0. If you see activity on this, you should add more memory to your machine!

    Read the article

  • HttpClient multithread performance

    - by pepper
    I have an application which downloads more than 4500 html pages from 62 target hosts using HttpClient (4.1.3 or 4.2-beta). It runs on Windows 7 64-bit. Processor - Core i7 2600K. Network bandwidth - 54 Mb/s. At this moment it uses such parameters: DefaultHttpClient and PoolingClientConnectionManager; Also it hasIdleConnectionMonitorThread from http://hc.apache.org/httpcomponents-client-ga/tutorial/html/connmgmt.html; Maximum total connections = 80; Default maximum connections per route = 5; For thread management it uses ForkJoinPool with the parallelism level = 5 (Do I understand correctly that it is a number of working threads?) In this case my network usage (in Windows task manager) does not rise above 2.5%. To download 4500 pages it takes 70 minutes. And in HttpClient logs I have such things: DEBUG ForkJoinPool-2-worker-1 [org.apache.http.impl.conn.PoolingClientConnectionManager]: Connection released: [id: 209][route: {}-http://stackoverflow.com][total kept alive: 6; route allocated: 1 of 5; total allocated: 10 of 80] Total allocated connections do not raise above 10-12, in spite of that I've set it up to 80 connections. If I'll try to rise parallelism level to 20 or 80, network usage remains the same but a lot connection time-outs will be generated. I've read tutorials on hc.apache.org (HttpClient Performance Optimization Guide and HttpClient Threading Guide) but they does not help. Task's code looks like this: public class ContentDownloader extends RecursiveAction { private final HttpClient httpClient; private final HttpContext context; private List<Entry> entries; public ContentDownloader(HttpClient httpClient, List<Entry> entries){ this.httpClient = httpClient; context = new BasicHttpContext(); this.entries = entries; } private void computeDirectly(Entry entry){ final HttpGet get = new HttpGet(entry.getLink()); try { HttpResponse response = httpClient.execute(get, context); int statusCode = response.getStatusLine().getStatusCode(); if ( (statusCode >= 400) && (statusCode <= 600) ) { logger.error("Couldn't get content from " + get.getURI().toString() + "\n" + response.toString()); } else { HttpEntity entity = response.getEntity(); if (entity != null) { String htmlContent = EntityUtils.toString(entity).trim(); entry.setHtml(htmlContent); EntityUtils.consumeQuietly(entity); } } } catch (Exception e) { } finally { get.releaseConnection(); } } @Override protected void compute() { if (entries.size() <= 1){ computeDirectly(entries.get(0)); return; } int split = entries.size() / 2; invokeAll(new ContentDownloader(httpClient, entries.subList(0, split)), new ContentDownloader(httpClient, entries.subList(split, entries.size()))); } } And the question is - what is the best practice to use multi threaded HttpClient, may be there is a some rules for setting up ConnectionManager and HttpClient? How can I use all of 80 connections and raise network usage? If necessary, I will provide more code.

    Read the article

  • String Sharing/Reference issue with objects in Delphi

    - by jenakai123
    My application builds many objects in memory based on filenames (among other string based information). I was hoping to optimise memory usage by storing the path and filename separately, and then sharing the path between objects in the same path. I wasn't trying to look at using a string pool or anything, basically my objects are sorted so if I have 10 objects with the same path I want objects 2-10 to have their path "pointed" at object 1's path (eg object[2].Path=object[1].Path); I have a problem though, I don't believe that my objects are in fact sharing a reference to the same string after I think I am telling them to (by the object[2].Path=object[1].Path assignment). When I do an experiment with a string list and set all the values to point to the first value in the list I can see the "memory conservation" in action, but when I use objects I see absolutely no change at all, admittedly I am only using task manager (private working set) to watch for memory use changes. Here's a contrived example, I hope this makes sense. I have an object: TfileObject=class(Tobject) FpathPart: string; FfilePart: string; end; Now I create 1,000,000 instances of the object, using a new string for each one: var x: integer; MyFilePath: string; fo: TfileObject; begin for x := 1 to 1000000 do begin // create a new string for every iteration of the loop MyFilePath:=ExtractFilePath(Application.ExeName); fo:=TfileObject.Create; fo.FpathPart:=MyFilePath; FobjectList.Add(fo); end; end; Run this up and task manager says I am using 68MB of memory or something. (Note that if I allocated MyFilePath outside of the loop then I do save memory because of 1 instance of the string, but this is a contrived example and not actually how it would happen in the app). Now I want to "optimise" my memory usage by making all objects share the same instance of the path string, since it's the same value: var x: integer; begin for x:=1 to FobjectList.Count-1 do begin TfileObject(FobjectList[x]).FpathPart:=TfileObject(FobjectList[0]).FpathPart; end; end; Task Manager shows absouletly no change. However if I do something similar with a TstringList: var x: integer; begin for x := 1 to 1000000 do begin FstringList.Add(ExtractFilePath(Application.ExeName)); end; end; Task Manager says 60MB memory use. Now optimise with: var x: integer; begin for x := 1 to FstringList.Count - 1 do FstringList[x]:=FstringList[0]; end; Task Manager shows the drop in memory usage that I would expect, now 10MB. So I seem to be able to share strings in a string list, but not in objects. I am obviously missing something conceptually, in code or both! I hope this makes sense, I can really see the ability to conserve memory using this technique as I have a lot of objects all with lots of string information, that data is sorted in many different ways and I would like to be able to iterate over this data once it is loaded into memory and free some of that memory back up again by sharing strings in this way. Thanks in advance for any assistance you can offer.

    Read the article

  • CodePlex Daily Summary for Tuesday, March 09, 2010

    CodePlex Daily Summary for Tuesday, March 09, 2010New Projects.NET Excel Wrapper - Read, Write, Edit & Automate Excel Files in .NET with ease: .NET Excel Wrapper encapsulates the complexity of working with multiple Excel objects giving you one central point to do all your processing. It h...Advancement Voyage: Advancement Voyage is a high quality RPG experience that provides all the advancement and voyaging that a player could hope for.ASP.Net Routing configuration: ASP.NET routing configuration enables you to configure the routes in the web.config bbinjest: bbinjestBuildUp: BuildUp is a build number increment tool for C# .net projects. It is run as a post build step in Visual Studio.Controlled Vocabulary: This project is devoted to creating tools to assist with Controlling Vocabulary in communication. The initial delivery is an Outlook 2010 Add-in w...CycleList: A replacement for the WPF ListBox Control. Displays only a single item and allows the user to change the selected item by clicking on it once. Very...Forensic Suite: A suite of security softwareFREE DNN Chat Module for 123 Flash Chat -- Embed FREE Chat Room!: 123 Flash Chat is a live chat solution and its DotNetNuke Chat Module helps to embed a live chat room into website with DotNetNuke(DNN) integrated ...HouseFly experimental controls: Experimental controls for use in HouseFly.ICatalogAll: junkMidiStylus: MidiStylus allows you to control MIDI-enabled hardware or software using your pressure-sensitive pen tablet. The program maps the X position, Y po...myTunes: Search for your favorite artistsNColony - Pluggable Socialism?: NColony will maximize the use of MEF to create flexible application architectures through a suite of plug-in solutions. If MEF is an outlet for plu...Network Monitor Decryption Expert: NmDecrypt is a Network Monitor Expert which when given a trace with encrypted frames, a security certificate, and a passkey will create a new trace...occulo: occulo is a free steganography program, meant to embed files within images with optional encrytion. Open Ant: A implementation of a Open Source Ant which is created to show what is possible in the serious game AntMe! The First implementation of that ProjectProgramming Patterns by example: Design patterns provide solutions to common software design problems. This project will contain samples, written in c# and ruby, of each design pat...project4k: Developing bulk mail system storing email informationQuail - Selenium Remote Control Made Easy: Quail makes it easy for Quality Assurance departments write automated tests against web applications. Both HTML and Silverlight applications can b...RedBulb for XNA Framework: RedBulb is a collection of utility functions and classes that make writing games with XNA a lot easier. Key features: Console,GUI (Labels, Buttons,...RegExpress: RegExpress is a WPF application that combines interactive demos of regular expressions with slide content. This was designed for a user group prese...RemoveFolder: Small utility program to remove empty foldersScrumTFS: ScrumTFSSharePoint - Open internal link in new window list definition: A simple SharePoint list definition to render SharePoint internal links with the option to open them in a new window.SqlSiteMap4MVC: SqlSiteMapProvider for ASP.Net MVC.T Sina .NET Client: t.sina.com.cn api 新浪微博APITest-Lint-Extensions: Test Lint is a free Typemock VS 2010 Extension that finds common problems in your unit tests as you type them. 
this project will host extensions ...ThinkGearNET: ThinkGearNET is a library for easy usage of the Neurosky Mindset headset from .NET .Wiki to Maml: This project enables you to write wiki syntax and have it converted into MAML syntax for Sandcastle documentation projects.WPF Undo/Redo Framework: This project attempts to solve the age-old programmer problem of supporting unlimited undo/redo in an application, in an easily reusable manner. Th...WPFValidators: WPF Validators Validações de campos para WPFWSP Listener: The WSP listener is a windows service application which waits for new WSC and WSP files in a specific folder. If a new WSC and WSP file are added, ...New Releases.NET Excel Wrapper - Read, Write, Edit & Automate Excel Files in .NET with ease: First Release: This is the first release which includes the main library release..NET Excel Wrapper - Read, Write, Edit & Automate Excel Files in .NET with ease: Updated Version: New Features:SetRangeValue using multidimensional array Print current worksheet Print all worksheets Format ranges background, color, alig...ArkSwitch: ArkSwitch v1.1.2: This release removes all memory reporting information, and is focused on stability.BattLineSvc: V2.1: - Fixed a bug where on system start-up, it would pop up a notification box to let you know the service started. Annoying! And fixed! - Fixed the ...BuildUp: BuildUp 1.0 Alpha 1: Use at your own risk!Not yet feature complete. Basic build incrementing and attribute overriding works. Still working on cascading build incremen...Controlled Vocabulary: 1.0.0.1: Initial Alpha Release. System Requirements Outlook 2010 .Net Framework 3.5 Installation 1. Close Outlook (Use Task Manager to ensure no running i...CycleList: CycleList: The binaries contain the .NET 3.5 DLL ONLY. Please download source for usage examples.FluentNHibernate.Search: 0.3 Beta: 0.3 Beta take the following changes : Mappings : - Field Mapping without specifying "Name" - Id Mapping without specifiying "Field" - Builtin Anal...FREE DNN Chat Module for 123 Flash Chat -- Embed FREE Chat Room!: 123 Flash Chat DNN Chat Module: With FREE DotNetNuke Chat Module of 123 Flash Chat, webmaster will be assist to add a chat room into DotNetNuke instantly and help to attract more ...GameStore League Manager: League Manager 1.0 release 3: This release includes a full installer so that you can get your league running faster and generate interest quicker.iExporter - iTunes playlist exporting: iExporter gui v2.3.1.0 - console v1.2.1.0: Paypal donate! Solved a big bug for iExporter ( Gui & Console ) When a track isn't located under the main iTunes library, iExporter would crash! ...jQuery.cssLess: jQuery.cssLess 0.3: New - Removed the dependency from XRegExp - Added comment support (both CSS style and C style) - Optimised it for speed - Added speed test TOD...jQuery.cssLess: jQuery.cssLess 0.4: NEW - @import directive - preserving of comments in the resulting CSS - code refactoring - more class oriented approach TODO - implement operation...MapWindow GIS: MapWindow 6.0 msi (March 8): Rewrote the shapefile saving code in the indexed case so that it uses the shape indices rather than trying to create features. 
This should allow s...MidiStylus: MidiStylus 0.5.1: MidiStylus Beta 0.5.1 This release contains basic functionality for transmitting MIDI data based on X position, Y position, and pressure value rea...MiniTwitter: 1.09.1: MiniTwitter 1.09.1 更新内容 修正 URL に & が含まれている時に短縮 URL がおかしくなるバグを修正Mosaictor: first executable: .exe file of the app in its current state. Mind you that this will likely be highly unstable due to heaps of uncaught errors.MvcContrib a Codeplex Foundation project: T4MVC: T4MVC is a T4 template that generates strongly typed helpers for ASP.NET MVC. You can download it below, and check out the documention here.N2 CMS: 2.0 beta: Major Changes ASP.NET MVC 2 templates Refreshed management UI LINQ support Performance improvements Auto image resize Upgrade Make a comp...NotesForGallery: ASP.NET AJAX Photo Gallery Control: NotesForGallery 2.0: PresentationNotesForGallery is an open source control on top of the Microsoft ASP.NET AJAX framework for easy displaying image galleries in the as...occulo: occulo 0.1 binaries: Windows binaries. Tested on Windows XP SP2.occulo: occulo 0.1 source: Initial source release.Open NFe: DANFE 1.9.5: Ajuste de layout e correção dos campos de ISS.patterns & practices Web Client Developer Guidance: Web Application Guidance -- March 8th Drop: This iteration we focused on documentation and bug fixes.PoshConsole: PoshConsole 2.0 Beta: With this release, I am refocusing PoshConsole... It will be a PowerShell 2 host, without support for PowerShell 1.0 I have used some of the new P...Quick Performance Monitor: QPerfmon 1.1: Now you can specify different updating frequencies.RedBulb for XNA Framework: Cipher Puzzle (Sample) Creators Club Package: RedBulb Sample Game: Cipher Puzzle http://bayimg.com/image/galgfaacb.jpgRedBulb for XNA Framework: RedBulbStarter (Base Code): This is the code you need to start with. Quick Start Guide: Download the latest version of RedBulb: http://redbulb.codeplex.com/releases/view/415...RoTwee: RoTwee 7.0.0.0 (Alpha): Now this version is under improvement of code structure and may be buggy. However movement of rotation is quite good in this version thanks to clea...SCSI Interface for Multimedia and Block Devices: Release 9 - Improvements and Bug Fixes: Changes I have made in this version: Fixed INQUIRY command timeout problem Lowered ISOBurn's memory usage significantly by not explicitly setting...SharePoint - Open internal link in new window list definition: Open link in new window list definition: First release, with english and italian localization supportSharePoint Outlook Connector: Version 1.2.3.2: Few bug fixing and some ui enhancementsSysI: sysi, release build: Better than ever -- now allows for escalation to adminThe Silverlight Hyper Video Player [http://slhvp.com]: Beta 1: Beta (1.1) The code is ready for intensive testing. 
I will update the code at least every second day until we are ready to freeze for V1, which wi...Truecrafting: Truecrafting 0.52: fixed several trinkets that broke just before i released 0.51, sorry fixed water elemental not doing anything while summoned if not using glyph o...Truecrafting: Truecrafting 0.53: fixed mp5 calculations when gear contained mp5 and made the formulas more efficient no need to rebuild profiles with this release if placed in th...umbracoSamplePackageCreator (beta): Working Beta: For Visual Studio 2008 creating packages for Umbraco 4.0.3.VCC: Latest build, v2.1.30307.0: Automatic drop of latest buildVCC: Latest build, v2.1.30308.0: Automatic drop of latest buildVOB2MKV: vob2mkv-1.0.3: This is a maintenance update of the VOB2MKV utility. The MKVMUX filter now describes the cluster locations using a separate SeekHead element at th...WPFValidators: WPFValidators 1.0 Beta: Primeira versão do componente ainda em Beta, pode ser utilizada em produção pois esta funcionando bem e as futuras alterações não sofreram muito im...WSDLGenerator: WSDLGenerator 0.0.06: - Added option to generate SharePoint compatible *disco.aspx file. - Changed commandline optionsWSP Listener: WSP Listener version 1.0.0.0: First version of the WSP Listener includes: Easy cop[y paste installation of WSP solutions Extended logging E-mail when installation is finish...Yet another pali text reader: Pali Text Reader App v1.1: new features/updates + search history is now a tab + format codes in dictionary + add/edit terms in the dictionary + pali keyboard inserts symbols...Most Popular ProjectsMetaSharpi4o - Indexed LINQResExBraintree Client LibraryGeek's LibrarySharepoint Feature ManagerConfiguration ManagementOragon Architecture SqlBuilderTerrain Independant Navigating Automaton v2.0WBFS ManagerMost Active ProjectsUmbraco CMSRawrSDS: Scientific DataSet library and toolsBlogEngine.NETjQuery Library for SharePoint Web ServicesFasterflect - A Fast and Simple Reflection APIFarseer Physics Enginepatterns & practices – Enterprise LibraryTeam FTW - Software ProjectIonics Isapi Rewrite Filter

    Read the article

  • Integrate Bing Search API into ASP.Net application

    - by sreejukg
    Couple of months back, I wrote an article about how to integrate the Bing Search engine (API 2.0) with an ASP.Net website. You can refer to that article here: http://weblogs.asp.net/sreejukg/archive/2012/04/07/integrate-bing-api-for-search-inside-asp-net-web-application.aspx

    Things are changing rapidly in the tech world and Bing has also changed! The Bing Search API 2.0 will work until August 1, 2012; after that it will not return results. Shocked? Don't worry - the API has moved to the Windows Azure market place, where you can sign up and continue using it, and there is a free version available based on your usage. In this article, I am going to explain how you can integrate the new Bing API that is available in the Windows Azure market place with your website.

    You can access the Windows Azure market place from the link below:
    https://datamarket.azure.com/

    There are lots of applications available for you to subscribe to and use, and Bing is one of them. You can find the new Bing Search API at the link below:
    https://datamarket.azure.com/dataset/5BA839F1-12CE-4CCE-BF57-A49D98D29A44

    To get access to the Bing Search API, first you need to register an account with the Windows Azure market place. Sign in to the Windows Azure market place site using your Windows Live account; once you sign in, you need to register for a Windows Azure market place account. From the Windows Azure market place, you will see the sign in button at the top right of the page. Clicking the sign in button takes you to the Windows Live ID authentication page, where you can enter a Windows Live ID to log in. Once logged in you will see the registration page for the Windows Azure market place as follows. You can agree or disagree to the email address usage by Microsoft; I believe selecting the check box means you will get email about what is happening in the Windows Azure market place. Click the continue button once you are done. On the next page, you should accept the terms of use - it is not optional, you must agree to the terms and conditions. Scroll down the page, select the I agree checkbox and click the Register button. Now you are a registered member of the Windows Azure market place and can subscribe to data applications.

    In order to use the Bing API in your application, you must obtain your Account Key. The previous version of Bing required an API key; the current version uses an Account Key instead. Once you are logged in to the Windows Azure market place, you can see "My Account" in the top menu; go to the "My Account" section, where you can manage your subscriptions and Account Keys. Account Keys will be used by your applications to access the subscriptions from the market place. Click the My Account link; you can see Account Keys in the left menu, and you can then add an account key or use the default Account Key available. Creating an account key is a very simple process, and you can remove the account keys you create if necessary.

    The next step is to subscribe to the Bing Search API. At this moment, Bing offers 2 APIs for search. The available options are as follows.

    1. Bing Search API - https://datamarket.azure.com/dataset/5ba839f1-12ce-4cce-bf57-a49d98d29a44
    2. Bing Search API – Web Results only - https://datamarket.azure.com/dataset/8818f55e-2fe5-4ce3-a617-0b8ba8419f65

    The difference is that the latter returns only web results, whereas with the former you can specify the source type, such as image, video, web or news.
    Carefully choose the API based on your application requirements. In this article I am going to use the Web Results Only API, but the steps are similar for both.

    Go to the API page https://datamarket.azure.com/dataset/8818f55e-2fe5-4ce3-a617-0b8ba8419f65; you can see the subscription options on the right side, and at the bottom of the page you can see the free option. Since I am going to use the free option, just click the Sign Up link for it. Select the I agree check box and click the Sign Up button. You will get a receipt page that details your subscription. Now you are ready to use the Bing Search API – Web Results.

    The next step is to integrate the API into your ASP.Net application. If you go to the Search API page (as well as the receipt page), you can see a .Net C# Class Library link. Click on the link and you will get a code file named "BingSearchContainer.cs".

    In the following sections I am going to demonstrate the use of the Bing Search API from an ASP.Net application. Create an empty ASP.Net web application; in the Solution Explorer the application will look as follows. Now add the downloaded code file ("BingSearchContainer.cs") to the project: right click your project in Solution Explorer, choose Add -> Existing Item, browse to the downloaded location, select the "BingSearchContainer.cs" file and add it to the project. To build the code file you need to add a reference to the following library:

        System.Data.Services.Client

    You can find the library in the .Net tab when you select Add -> Reference. Try to build your project now; it should build without any errors.

    Add an ASP.Net page to the project. I have included a text box and a button, then a GridView, on the page. The idea is to search for the text entered and display the results in the GridView. The page will look in the Visual Studio designer as follows, and the markup of the page is as follows. In the button click event handler for the search button, I have used the following code (a sketch of the markup and this basic handler is also included at the end of this article). Now run your project, enter some text in the text box and click the Search button; you will see the results coming from Bing - cool. I entered the text "Microsoft" in the textbox, clicked the button and got the following results.

    Searching Specific Websites

    If you want to search a particular website, you prefix the query with site:<site url>, and if you have more sites, separate them with a pipe (|). For example, the search query site:microsoft.com | site:adobe.com design will search for the word design and return results from Microsoft.com and Adobe.com. See the sample code below, which searches only Microsoft.com for the text entered in the sample above (a multi-site variant is sketched right after it):

        var webResults = bingContainer.Web("site:www.Microsoft.com " + txtSearch.Text, null, null, null, null, null, null);
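    As an illustration of the pipe syntax described above, here is a minimal sketch of a query restricted to two sites. It assumes the bingContainer and txtSearch names used in this article's code and binds the results the same way as the handler shown later, so treat it as an example rather than the original post's code:

        // Sketch: search both Microsoft.com and Adobe.com for the entered text.
        // The pipe (|) separates the site: filters; the search terms follow them.
        var multiSiteResults = bingContainer.Web(
            "site:microsoft.com | site:adobe.com " + txtSearch.Text,
            null, null, null, null, null, null);
        lstResults.DataSource = multiSiteResults;
        lstResults.DataBind();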
    Paging the results returned by the API

    By default the Bing API will return 100 results for your query. The default code file that you downloaded from Bing doesn't include any option for paging, but you can modify the downloaded code to support it. The Bing API supports two parameters: $top (the number of results to return) and $skip (the number of results to skip). So if you want the 3rd page of results with a page size of 10, you need to pass $top=10 and $skip=20.

    Open BingSearchContainer.cs in the editor. You can see the Web method in it as follows:

        public DataServiceQuery<WebResult> Web(String Query, String Market, String Adult, Double? Latitude, Double? Longitude, String WebFileType, String Options)
        {

    In the method signature, I have added two more parameters:

        public DataServiceQuery<WebResult> Web(String Query, String Market, String Adult, Double? Latitude, Double? Longitude, String WebFileType, String Options, int resultCount, int pageNo)
        {

    and in the method body you need to pass the parameters to the query variable:

        query = query.AddQueryOption("$top", resultCount);
        query = query.AddQueryOption("$skip", (pageNo - 1) * resultCount);
        return query;

    Note that I didn't perform any validation, but you need to check conditions such as resultCount and pageNo being greater than or equal to 1; if the parameters are not valid, the Bing Search API will throw an error. The modified method is as follows (the changes are highlighted).

    Now see the following code in the SearchPage.aspx.cs file:

        protected void btnSearch_Click(object sender, EventArgs e)
        {
            var bingContainer = new Bing.BingSearchContainer(new Uri("https://api.datamarket.azure.com/Bing/SearchWeb/"));
            // replace this value with your account key
            var accountKey = "your key";
            // the next line configures the bingContainer to use your credentials
            bingContainer.Credentials = new NetworkCredential(accountKey, accountKey);
            var webResults = bingContainer.Web("site:microsoft.com " + txtSearch.Text, null, null, null, null, null, null, 3, 2);
            lstResults.DataSource = webResults;
            lstResults.DataBind();
        }

    This code will return 3 results starting from the second page (that is, it skips the first 3 results). See the result page as follows.

    Bing provides complete integration to its offerings. When you develop search-based applications, you can use the power of Bing to perform the search. Integrating the Bing Search API into an ASP.Net application is a simple process, and without investing much time you can develop a good search-based application. Make sure you read the terms of use before designing the application and decide which API usage option is suitable for you.

    Further readings
    Bing API Migration Guide - http://go.microsoft.com/fwlink/?LinkID=248077
    Bing API FAQ - http://go.microsoft.com/fwlink/?LinkID=252146
    Bing API Schema Guide - http://go.microsoft.com/fwlink/?LinkID=252151
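    For reference, here is a minimal sketch of the page markup and the basic (non-paging) click handler referred to earlier in this article. The control names txtSearch, btnSearch and lstResults are taken from the code above, and the use of a GridView is an assumption based on the article's description, so treat this as an illustration rather than the original post's exact markup and code:

        <%-- SearchPage.aspx (sketch): a text box, a search button and a grid for the results --%>
        <asp:TextBox ID="txtSearch" runat="server" />
        <asp:Button ID="btnSearch" runat="server" Text="Search" OnClick="btnSearch_Click" />
        <asp:GridView ID="lstResults" runat="server" AutoGenerateColumns="true" />

        // SearchPage.aspx.cs (sketch): query Bing for the entered text and bind the results
        protected void btnSearch_Click(object sender, EventArgs e)
        {
            // service root of the Bing Search API – Web Results only subscription
            var bingContainer = new Bing.BingSearchContainer(new Uri("https://api.datamarket.azure.com/Bing/SearchWeb/"));
            // the account key from the Windows Azure market place acts as both user name and password
            var accountKey = "your key";
            bingContainer.Credentials = new System.Net.NetworkCredential(accountKey, accountKey);
            // unmodified 7-parameter Web() overload, i.e. no paging parameters
            var webResults = bingContainer.Web(txtSearch.Text, null, null, null, null, null, null);
            lstResults.DataSource = webResults;
            lstResults.DataBind();
        }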

    Read the article

< Previous Page | 75 76 77 78 79 80 81 82 83 84 85 86  | Next Page >