Search Results

Search found 784 results on 32 pages for 'simplex noise'.

Page 30/32 | < Previous Page | 26 27 28 29 30 31 32  | Next Page >

  • Using Objective-C Blocks

    - by Sean
    Today I was experimenting with Objective-C's blocks so I thought I'd be clever and add to NSArray a few functional-style collection methods that I've seen in other languages: @interface NSArray (FunWithBlocks) - (NSArray *)collect:(id (^)(id obj))block; - (NSArray *)select:(BOOL (^)(id obj))block; - (NSArray *)flattenedArray; @end The collect: method takes a block which is called for each item in the array and expected to return the results of some operation using that item. The result is the collection of all of those results. (If the block returns nil, nothing is added to the result set.) The select: method will return a new array with only the items from the original that, when passed as an argument to the block, the block returned YES. And finally, the flattenedArray method iterates over the array's items. If an item is an array, it recursively calls flattenedArray on it and adds the results to the result set. If the item isn't an array, it adds the item to the result set. The result set is returned when everything is finished. So now that I had some infrastructure, I needed a test case. I decided to find all package files in the system's application directories. This is what I came up with: NSArray *packagePaths = [[[NSSearchPathForDirectoriesInDomains(NSAllApplicationsDirectory, NSAllDomainsMask, YES) collect:^(id path) { return (id)[[[NSFileManager defaultManager] contentsOfDirectoryAtPath:path error:nil] collect:^(id file) { return (id)[path stringByAppendingPathComponent:file]; }]; }] flattenedArray] select:^(id fullPath) { return [[NSWorkspace sharedWorkspace] isFilePackageAtPath:fullPath]; }]; Yep - that's all one line and it's horrid. I tried a few approaches at adding newlines and indentation to try to clean it up, but it still feels like the actual algorithm is lost in all the noise. I don't know if it's just a syntax thing or my relative in-experience with using a functional style that's the problem, though. For comparison, I decided to do it "the old fashioned way" and just use loops: NSMutableArray *packagePaths = [NSMutableArray new]; for (NSString *searchPath in NSSearchPathForDirectoriesInDomains(NSAllApplicationsDirectory, NSAllDomainsMask, YES)) { for (NSString *file in [[NSFileManager defaultManager] contentsOfDirectoryAtPath:searchPath error:nil]) { NSString *packagePath = [searchPath stringByAppendingPathComponent:file]; if ([[NSWorkspace sharedWorkspace] isFilePackageAtPath:packagePath]) { [packagePaths addObject:packagePath]; } } } IMO this version was easier to write and is more readable to boot. I suppose it's possible this was somehow a bad example, but it seems like a legitimate way to use blocks to me. (Am I wrong?) Am I missing something about how to write or structure Objective-C code with blocks that would clean this up and make it clearer than (or even just as clear as) the looped version?
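    One way to make the block version easier to follow, without changing any of the calls above, is to break the one-liner into named intermediate steps. The sketch below is just the same pipeline re-indented, assuming the FunWithBlocks category from the question is in scope:

    ```objc
    // Same work as the one-liner above, split into named stages.
    NSArray *searchPaths = NSSearchPathForDirectoriesInDomains(NSAllApplicationsDirectory,
                                                                NSAllDomainsMask, YES);

    // For every application directory, build the full path of each item it contains.
    NSArray *allItemPaths = [[searchPaths collect:^(id path) {
        return (id)[[[NSFileManager defaultManager] contentsOfDirectoryAtPath:path error:nil]
            collect:^(id file) {
                return (id)[path stringByAppendingPathComponent:file];
            }];
    }] flattenedArray];

    // Keep only the items that are packages.
    NSArray *packagePaths = [allItemPaths select:^(id fullPath) {
        return [[NSWorkspace sharedWorkspace] isFilePackageAtPath:fullPath];
    }];
    ```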

    Read the article

  • Running same powershell script multiple asynchronous times with separate runspace

    - by teqnomad
    Hi, I have a powershell script which is called by a batch script which is called by Trap Receiver (which also passes environment variables) (running on windows 2008). The traps are flushed out at times in sets of 2-4 trap events, and the batch script will echo the trap details for each message to a logfile, but the powershell script on the next line of the batch script will only appear to process the first trap message (the powershell script writes to the same logfile). My interpetation is that the defaultrunspace is common to all iterations of the script running and this is why the others appear to be ignored. I've tried adding "-sta" when I invoke the powershell script using "powershell.exe -command", but this didn't help. I've researched and found a method using C# but I don't know this language, and busy enough learning powershell, so hoping to find a more direct solution especially as interleaving a "wrapper" between batch and powershell will involve passing the environment variables. http://www.codeproject.com/KB/threads/AsyncPowerShell.aspx I've hunted through stackoverflow, and again the only question of similar vein was using C#. Any suggestions welcome. Some script background: The powershell script is actually a modification of a great script found at gregorystrike website - cant post the link as I'm limited to one link but its the one for Lefthand arrays. Lots of mods so it can do multiple targets from one .ini file, taking in the environment variables, and options to run portions of the script interactively with winform. But you can see the gist of the original script. The batch script is pretty basic. The keys things are I'm trying to filter out trap noise using the :~ operator, and I tried -sta option to see if this would compartmentalise the powershell script. set debug=off set CMD_LINE_ARGS="%*" set LHIPAddress="%2" set VARBIND8="%8" shift shift shift shift shift shift shift set CHASSIS="%9" echo %DATE% %TIME% "Trap Received: %LHIPAddress% %CHASSIS% %VARBIND8%" >> C:\Logs\trap_out.txt set ACTION="%VARBIND8:~39,18%" echo %DATE% %TIME% "Action substring is %ACTION%" 2>&1 >> C:\Logs\trap_out.txt if %ACTION%=="Remote Copy Volume" ( echo Prepostlefthand_env_v2.9 >> C:\Logs\trap_out.txt c:\Windows\System32\WindowsPowerShell\v1.0\PowerShell.exe -sta -executionpolicy unrestricted -command " & 'C:\Scripts\prepostlefthand_env_v2.9.ps1' Backupsettings.ini ALL" 2>&1 >> C:\Logs\trap_out.txt ) ELSE ( echo %DATE% %TIME% Action substring is %ACTION% so exiting" 2>&1 >> C:\Logs\trap.out.txt ) exit

    Read the article

  • do I need to close an audio Clip?

    - by Michael
    have an application that processes real-time data and is supposed to beep when a certain event occurs. The triggering event can occur multiple times per second, and if the beep is already playing when another event triggers the code is just supposed to ignore it (as opposed to interrupting the current beep and starting a new one). Here is the basic code: Clip clickClip public void prepareProcess() { super.prepareProcess(); clickClip = null; try { clipFile = new File("C:/WINDOWS/Media/CHIMES.wav"); ais = AudioSystem.getAudioInputStream(clipFile); clickClip = AudioSystem.getClip(); clickClip.open(ais); fileIsLoaded = true; } catch (Exception ex) { clickClip = null; fileIsLoaded = false; } } public void playSound() { if (fileIsLoaded) { if ((clickClip==null) || (!clickClip.isRunning())) { try { clickClip.setFramePosition(0); clickClip.start(); } catch (Exception ex) { System.out.println("Cannot play click noise"); ex.printStackTrace(); } } } The prepareProcess method gets run once in the beginning, and the playSound method is called every time a triggering event occurs. My question is: do I need to close the clickClip object? I know I could add an actionListener to monitor for a Stop event, but since the event occurs so frequently I'm worried the extra processing is going to slow down the real-time data collection. The code seems to run fine, but my worry is memory leaks. The code above is based on an example I found while searching the net, but the example used an actionListener to close the Clip specifically "to eliminate memory leaks that would occur when the stop method wasn't implemented". My program is intended to run for hours so any memory leaks I have will cause problems. I'll be honest: I have no idea how to verify whether or not I've got a problem. I'm using Netbeans, and running the memory profiler just gave me a huge list of things that I don't know how to read. This is supposed to be the simple part of the program, and I'm spending hours on it. Any help would be greatly appreciated! Michael
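    For what it's worth, the listener-based cleanup mentioned above only matters if a new Clip is opened for every event; with the single Clip reused from prepareProcess there is nothing extra to release per beep, and closing the clip and stream at shutdown should suffice. In javax.sound.sampled the listener is a LineListener rather than an ActionListener, and its callback is a couple of comparisons, so it is unlikely to slow real-time collection. A minimal sketch of the open-play-close-on-STOP pattern (class and method names here are illustrative, not from the question):

    ```java
    import javax.sound.sampled.AudioSystem;
    import javax.sound.sampled.Clip;
    import javax.sound.sampled.LineEvent;
    import javax.sound.sampled.LineListener;
    import java.io.File;

    public class OneShotBeep {
        // Opens a fresh Clip, plays it once, and releases the line when playback stops.
        public static void play(File wav) throws Exception {
            final Clip clip = AudioSystem.getClip();
            clip.open(AudioSystem.getAudioInputStream(wav));
            clip.addLineListener(new LineListener() {
                @Override
                public void update(LineEvent event) {
                    if (event.getType() == LineEvent.Type.STOP) {
                        clip.close();   // release the line once playback ends
                    }
                }
            });
            clip.start();
        }
    }
    ```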

    Read the article

  • Programming a callback function within a jQuery plugin

    - by ILMV
    I'm writing a jQuery plug-in so I can reuse this code in many places as it is a very well used piece of code, the code itself adds a new line to a table which has been cloned from a hidden row, it continues to perform a load of manipulations on the new row. I'm currently referencing it like this: $(".abc .grid").grid(); But I want to include a callback so each area the plug-in is called from can do something a bit more unique when the row has been added. I've used the jQuery AJAX plug-in before, so have used the success callback function, but cannot understand how the code works in the background. Here's what I want to achieve: $(".abc .grid").grid({ row_added: function() { // do something a bit more specific here } }); Here's my plug-in code (function($){ $.fn.extend({ //pass the options variable to the function grid: function() { return this.each(function() { grid_table=$(this).find('.grid-table > tbody'); grid_hidden_row=$(this).find('.grid-hidden-row'); //console.debug(grid_hidden_row); $(this).find('.grid-add-row').click(function(event) { /* * clone row takes a hidden dummy row, clones it and appends a unique row * identifier to the id. Clone maintains our jQuery binds */ // get the last id last_row=$(grid_table).find('tr:last').attr('id'); if(last_row===undefined) { new_row=1; } else { new_row=parseInt(last_row.replace('row',''),10)+1; } // append element to target, changes it's id and shows it $(grid_table).append($(grid_hidden_row).clone(true).attr('id','row'+new_row).removeClass('grid-hidden-row').show()); // append unique row identifier on id and name attribute of seledct, input and a $('#row'+new_row).find('select, input, a').each(function(id) { $(this).appendAttr('id','_row'+new_row); $(this).replaceAttr('name','_REPLACE_',new_row); }); // disable all the readonly_if_lines options if this is the first row if(new_row==1) { $('.readonly_if_lines :not(:selected)').attr('disabled','disabled'); } }); $(this).find('.grid-remove-row').click(function(event) { /* * Remove row does what it says on the tin, as well as a few other house * keeping bits and pieces */ // remove the parent tr $(this).parents('tr').remove(); // recalculate the order value5 //calcTotal('.net_value ','#gridform','#gridform_total'); // if we've removed the last row remove readonly locks row_count=grid_table.find('tr').size(); console.info(row_count); if(row_count===0) { $('.readonly_if_lines :disabled').removeAttr('disabled'); } }); }); } }); })(jQuery); I've done the usually searching on elgooG... but I seem to be getting a lot of noise with little result, any help would be greatly appreciated. Thanks!
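    Roughly, the callback just needs to ride in on an options argument and be invoked once the row has been appended. The sketch below shows only that plumbing (the existing clone/append/renumber logic is elided), and what gets passed to the callback is an illustrative choice:

    ```javascript
    (function ($) {
        $.fn.extend({
            // The plugin now accepts an options object.
            grid: function (options) {
                // Merge caller-supplied options with safe defaults so the
                // callback can always be invoked.
                var settings = $.extend({
                    row_added: function (newRow) {}   // no-op default
                }, options);

                return this.each(function () {
                    var container = $(this);
                    container.find('.grid-add-row').click(function (event) {
                        // ... existing clone/append/renumber logic goes here ...
                        var newRow = container.find('.grid-table > tbody tr:last');

                        // Hand control back to the caller once the row exists;
                        // `this` inside the callback is the grid container.
                        settings.row_added.call(container, newRow);
                    });
                });
            }
        });
    })(jQuery);
    ```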

    Read the article

  • C strange array behaviour

    - by LukeN
    After learning that both strncmp is not what it seems to be and strlcpy not being available on my operating system (Linux), I figured I could try and write it myself. I found a quote from Ulrich Drepper, the libc maintainer, who posted an alternative to strlcpy using mempcpy. I don't have mempcpy either, but it's behaviour was easy to replicate. First of, this is the testcase I have #include <stdio.h> #include <string.h> #define BSIZE 10 void insp(const char* s, int n) { int i; for (i = 0; i < n; i++) printf("%c ", s[i]); printf("\n"); for (i = 0; i < n; i++) printf("%02X ", s[i]); printf("\n"); return; } int copy_string(char *dest, const char *src, int n) { int r = strlen(memcpy(dest, src, n-1)); dest[r] = 0; return r; } int main() { char b[BSIZE]; memset(b, 0, BSIZE); printf("Buffer size is %d", BSIZE); insp(b, BSIZE); printf("\nFirst copy:\n"); copy_string(b, "First", BSIZE); insp(b, BSIZE); printf("b = '%s'\n", b); printf("\nSecond copy:\n"); copy_string(b, "Second", BSIZE); insp(b, BSIZE); printf("b = '%s'\n", b); return 0; } And this is its result: Buffer size is 10 00 00 00 00 00 00 00 00 00 00 First copy: F i r s t b = 46 69 72 73 74 00 62 20 3D 00 b = 'First' Second copy: S e c o n d 53 65 63 6F 6E 64 00 00 01 00 b = 'Second' You can see in the internal representation (the lines insp() created) that there's some noise mixed in, like the printf() format string in the inspection after the first copy, and a foreign 0x01 in the second copy. The strings are copied intact and it correctly handles too long source strings (let's ignore the possible issue with passing 0 as length to copy_string for now, I'll fix that later). But why are there foreign array contents (from the format string) inside my destination? It's as if the destination was actually RESIZED to match the new length.
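    What the dump shows is not the destination being resized: memcpy(dest, src, n-1) always copies n-1 = 9 bytes, but the literal "First" is only six bytes long (terminator included), so the last three bytes copied come from whatever the compiler laid out after that literal — here the "b = '%s'" format string, which is exactly the 62 20 3D ("b =") visible in the inspection. A sketch of a copy that never reads past the end of the source, closer to strlcpy's contract, might look like this (the size_t signature is an assumption, not the exact prototype above):

    ```c
    #include <string.h>

    /* Copy at most n-1 bytes of src into dest and always NUL-terminate (for n > 0).
     * Never reads past the end of src. Returns strlen(src), as strlcpy does, so the
     * caller can detect truncation by comparing the return value against n. */
    size_t copy_string(char *dest, const char *src, size_t n)
    {
        size_t len = strlen(src);                       /* source length only      */
        if (n > 0) {
            size_t to_copy = (len < n - 1) ? len : n - 1;
            memcpy(dest, src, to_copy);                 /* bounded by both sizes   */
            dest[to_copy] = '\0';                       /* terminate inside dest   */
        }
        return len;
    }
    ```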

    Read the article

  • Lighttpd + fastcgi + python (for django) slow on first request

    - by EagleOne
    I'm having a problem with a django website I host with lighttpd + fastcgi. It works great but it seems that the first request always takes up to 3seconds. Subsequent requests are much faster (<1s). I activated access logs in lighttpd in order to track the issue. But I'm kind of stuck. Here are logs where I 'lose' 4s (from 10:04:17 to 10:04:21): 2012-12-01 10:04:17: (mod_fastcgi.c.3636) handling it in mod_fastcgi 2012-12-01 10:04:17: (response.c.470) -- before doc_root 2012-12-01 10:04:17: (response.c.471) Doc-Root : /var/www 2012-12-01 10:04:17: (response.c.472) Rel-Path : /finderauto.fcgi 2012-12-01 10:04:17: (response.c.473) Path : 2012-12-01 10:04:17: (response.c.521) -- after doc_root 2012-12-01 10:04:17: (response.c.522) Doc-Root : /var/www 2012-12-01 10:04:17: (response.c.523) Rel-Path : /finderauto.fcgi 2012-12-01 10:04:17: (response.c.524) Path : /var/www/finderauto.fcgi 2012-12-01 10:04:17: (response.c.541) -- logical -> physical 2012-12-01 10:04:17: (response.c.542) Doc-Root : /var/www 2012-12-01 10:04:17: (response.c.543) Rel-Path : /finderauto.fcgi 2012-12-01 10:04:17: (response.c.544) Path : /var/www/finderauto.fcgi 2012-12-01 10:04:21: (response.c.128) Response-Header: HTTP/1.1 200 OK Last-Modified: Sat, 01 Dec 2012 09:04:21 GMT Expires: Sat, 01 Dec 2012 09:14:21 GMT Content-Type: text/html; charset=utf-8 Cache-Control: max-age=600 Transfer-Encoding: chunked Date: Sat, 01 Dec 2012 09:04:21 GMT Server: lighttpd/1.4.28 I guess that if there is a problem, it's whith my configuration. So here is the way I launch my django app: python manage.py runfcgi method=threaded host=127.0.0.1 port=3033 And here is my lighttpd conf: server.modules = ( "mod_access", "mod_alias", "mod_compress", "mod_redirect", "mod_rewrite", "mod_fastcgi", "mod_accesslog", ) server.document-root = "/var/www" server.upload-dirs = ( "/var/cache/lighttpd/uploads" ) server.errorlog = "/var/log/lighttpd/error.log" server.pid-file = "/var/run/lighttpd.pid" server.username = "www-data" server.groupname = "www-data" accesslog.filename = "/var/log/lighttpd/access.log" debug.log-request-header = "enable" debug.log-response-header = "enable" debug.log-file-not-found = "enable" debug.log-request-handling = "enable" debug.log-timeouts = "enable" debug.log-ssl-noise = "enable" debug.log-condition-cache-handling = "enable" debug.log-condition-handling = "enable" fastcgi.server = ( "/finderauto.fcgi" => ( "main" => ( # Use host / port instead of socket for TCP fastcgi "host" => "127.0.0.1", "port" => 3033, #"socket" => "/home/finderadmin/finderauto.sock", "check-local" => "disable", "fix-root-scriptname" => "enable", ) ), ) alias.url = ( "/media" => "/home/user/django/contrib/admin/media/", ) url.rewrite-once = ( "^(/media.*)$" => "$1", "^/favicon\.ico$" => "/media/favicon.ico", "^(/.*)$" => "/finderauto.fcgi$1", ) index-file.names = ( "index.php", "index.html", "index.htm", "default.htm", " index.lighttpd.html" ) url.access-deny = ( "~", ".inc" ) static-file.exclude-extensions = ( ".php", ".pl", ".fcgi" ) ## Use ipv6 if available #include_shell "/usr/share/lighttpd/use-ipv6.pl" dir-listing.encoding = "utf-8" server.dir-listing = "enable" compress.cache-dir = "/var/cache/lighttpd/compress/" compress.filetype = ( "application/x-javascript", "text/css", "text/html", "text/plain" ) include_shell "/usr/share/lighttpd/create-mime.assign.pl" include_shell "/usr/share/lighttpd/include-conf-enabled.pl" If any of you could help me finding out where I lose these 3 or 4 s. I would much appreciate. Thanks in advance!

    Read the article

  • How to check total cache size using a program

    - by user1888541
    so I'm having some trouble creating a program to measure cache size in C. I understand the basic concept of going about this but I'm still having trouble figuring out exactly what I am doing wrong. Basically, I create an array of varying length (going by power of 2s) and access each element in the array and put it in a dummy variable. I go through the array and do this around 1000 times to negate the "noise" that would otherwise occur if I only did it once to get an accurate measurement for time. Then, I look for the size that causes a big jump in access time. Unfortunately, this is where I am having my problem, I don't see this jump using my code and clearly I am doing something wrong. Another thing is that I used /proc/cpuinfo to check the cache and it said the size was 6114 but that was not a power of 2. I was told to go by powers of 2 to figure out the cache can anyone explain why this is? Here is the just of my code...I will post the rest if need be { struct timeval start; struct timeval end; // int n = 1; // change this to test different sizes int array_size = 1048576*n; // I'm trying to check the time "manually" first before creating a loop for the program to do it by itself this is why I have a separate "n" variable to increase the size char x = 0; int i =0, j=0; char *a; a =malloc(sizeof(char) * (array_size)); gettimeofday(&start,NULL); for(i=0; i<1000; i++) { for(j=0; j < array_size; j += 1) { x = a[j]; } } gettimeofday(&end,NULL); int timeTaken = (end.tv_sec * 1000000 + end.tv_usec) - (start.tv_sec *1000000 + start.tv_usec); printf("Time Taken: %d \n", timeTaken); printf("Average: %f \n", (double)timeTaken/((double)array_size); }
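    Two things are probably hiding the jump: a byte-by-byte sequential walk is exactly the pattern hardware prefetchers are built for, so most misses are hidden, and an optimizing compiler is free to delete the x = a[j] load because x is never used. As for /proc/cpuinfo, it reports the cache size in KB (6 MB of L2 is the usual figure for Core 2 parts of that era), and cache sizes need not be powers of two — doubling the array size is just a convenient way to sweep across the boundaries. A rough sketch that strides by an assumed 64-byte cache line and keeps the loads alive with a volatile sink:

    ```c
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/time.h>

    /* Hedged sketch, not a calibrated benchmark: time strided reads over arrays
     * of increasing size. Striding by the cache-line size (64 bytes is assumed
     * here) gives roughly one potential miss per access once the array outgrows
     * a cache level; the volatile sink keeps the compiler from deleting loads. */
    #define STRIDE 64
    #define REPS   100

    volatile char sink;

    static double pass_time_us(const char *a, size_t size)
    {
        struct timeval start, end;
        gettimeofday(&start, NULL);
        for (int r = 0; r < REPS; r++)
            for (size_t j = 0; j < size; j += STRIDE)
                sink = a[j];
        gettimeofday(&end, NULL);
        return (end.tv_sec - start.tv_sec) * 1e6 + (end.tv_usec - start.tv_usec);
    }

    int main(void)
    {
        for (size_t size = 4096; size <= ((size_t)64 << 20); size *= 2) {
            char *a = malloc(size);
            memset(a, 1, size);                      /* fault the pages in first */
            double us = pass_time_us(a, size);
            double accesses = (double)REPS * (size / STRIDE);
            printf("%8zu KB: %.2f ns/access\n", size / 1024, us * 1000.0 / accesses);
            free(a);
        }
        return 0;
    }
    ```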

    Read the article

  • Terminating a long-executing thread and then starting a new one in response to user changing parameters via UI in an applet

    - by user1817170
    I have an applet which creates music using the JFugue API and plays it for the user. It allows the user to input a music phrase which the piece will be based on, or lets them choose to have a phrase generated randomly. I had been using the following method (successfully) to simply stop and start the music, which runs in a thread using the Player class from JFugue. I generate the music using my classes and user input from the applet GUI...then... private playerThread pthread; private Thread threadPlyr; private Player player; (from variables declaration) public void startMusic(Pattern p) // pattern is a JFugue object which holds the generated music { if (pthread == null) { pthread = new playerThread(); } else { pthread = null; pthread = new playerThread(); } if (threadPlyr == null) { threadPlyr = new Thread(pthread); } else { threadPlyr = null; threadPlyr = new Thread(pthread); } pthread.setPattern(p); threadPlyr.start(); } class playerThread implements Runnable // plays midi using jfugue Player { private Pattern pt; public void setPattern(Pattern p) { pt = p; } @Override public void run() { try { player.play(pt); // takes a couple mins or more to execute resetGUI(); } catch (Exception exception) { } } } And the following to stop music when user presses the stop/start button while Player.isPlaying() is true: public void stopMusic() { threadPlyr.interrupt(); threadPlyr = null; pthread = null; player.stop(); } Now I want to implement a feature which will allow the user to change parameters while the music is playing, create an updated music pattern, and then play THAT pattern. Basically, the idea is to make it simulate "real time" adjustments to the generated music for the user. Well, I have been beating my head against the wall on this for a couple of weeks. I've read all the standard java documentation, researched, read, and searched forums, and I have tried many different ideas, none of which have succeeded. The problem I've run into with all approaches I've tried is that when I start the new thread with the new, updated musical pattern, all the old threads ALSO start, and there is a cacophony of unintelligible noise instead of my desired output. From what I've gathered, the issue seems to be that all the methods I've come across require that the thread is able to periodically check the value of a "flag" variable and then shut itself down from within its "run" block in response to that variable. However, since my thread makes a call that takes several minutes minimum to execute (playing the music), and I need to terminate it WHILE it is executing this, there is really no safe way to do so. So, I'm wondering if there is something I'm missing when it comes to threads, or if perhaps I can accomplish my goal using a totally different approach. Any ideas or guidance is greatly appreciated! Thank you!
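    One pattern that may help, given that stopMusic() above already unblocks play(pt) via player.stop(): when the user changes a parameter, stop the current player, wait briefly for the old worker thread to exit, and only then start a fresh thread with the regenerated pattern, so no stale thread is left around to replay its old pattern. A hedged sketch of such a handler, added to the same applet class (the method name and the join timeout are illustrative):

    ```java
    // Assumes the fields and methods shown above: player, threadPlyr, startMusic().
    public void restartMusic(Pattern updatedPattern) {
        if (threadPlyr != null && threadPlyr.isAlive()) {
            player.stop();                 // unblocks player.play(pt) in the old worker
            try {
                threadPlyr.join(2000);     // give the old thread a moment to finish up
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
            }
        }
        startMusic(updatedPattern);        // one live playback thread at a time
    }
    ```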

    Read the article

  • How to diagnose computer lockup/freezing problem

    - by Scott Mitchell
    I built a desktop computer a couple years back with the following specs: CPU: Intel Core 2 Quad Q9300 Yorkfield 2.5GHz 6MB L2 Cache LGA 775 95W Quad-Core Processor BX80580Q9300 Motherboard: EVGA 122-CK-NF68-T1 LGA 775 NVIDIA nForce 680i SLI ATX Intel Motherboard Video Card: Two EVGA 256-P2-N758-TR GeForce 8600GT SCC 256MB 128-bit GDDR3 PCI Express x16 SLI Supported Video Card PSU: SeaSonic S12 Energy Plus SS-550HT 550W ATX12V V2.3 / EPS12V V2.91 SLI Certified CrossFire Ready 80 PLUS Certified Active PFC Power Supply Memory: Two G.SKILL 4GB (2 x 2GB) 240-Pin DDR2 SDRAM DDR2 800 (PC2 6400) Dual Channel Kit Desktop Memory Model F2-6400CL5D-4GBPQ Since its inception, the machine has periodically locked up, the regularlity having varied over the years from once a day to once a month. Typically, lockups happen once every few days. By "lockup" I mean my computer just freezes. The screen locks up, I can't move the mouse. Hitting keys on my keyboard that normally turn LEDs on or off on the keyboard (such as Caps Lock) no longer turn the LEDs on or off. If there was music playing at the time of the lockup, noise keeps coming out of the speakers, but it's just the current frequency/note that plays indefinitely. There is no BSOD. When such a lockup occurs I have to do a hard reboot by either turning off the computer or hitting the reset button. I have the most recent version of the NVIDIA hardware drivers, and update them semi-regularly, but that hasn't seemed to help. I am currently using Windows 7 x64, but was previously using Windows Server 2003 x64 and having the same lockup issues. My guess is that it's somehow video driver or motherboard related, but I don't know how to go about diagnosing this problem to narrow down which of the two is the culprit. Additional information re: cooling Regarding cooling... I've not installed any after-market cooling systems aside from two regular fans I scavenged from an older computer. The fan atop the CPU is the one that shipped with it. One of the two scavenged fans I added it located at the bottom tower of the corner, in an attempt to create some airflow from front to back. The second fan is pointed directly at the two video cards. SpeedFan installation and readings Per studiohack's suggestion, I installed SpeedFan, which provided the following temperature readings: GPU: 63C GPU: 65C System: 76C CPU: 64C AUX: 36C Core 0: 78C Core 1: 76C Core 2: 79C Core 3: 79C Update #3: Another Lockup :-( Well, I had another lockup last night. :-( SpeedFan reported the CPU temp at 38 C when it happened, and there was no spike in temperature leading up to the freeze. One thing I notice is that the freeze seems more likely to happen if I am watching a video. In fact, of the last 5 freezes over the past month, 4 of them have been while watching a video on Flickr. Not necessarily the same video, but a video nevertheless. I don't know if this is just coincidence or if it means anything. (As an aside, each night before bedtime my 2 year old daughter sits on my lap and watches some home videos on Flickr and, in the last month, has learned the phrase, "Uh oh, computer broke.") Update #4: MemTest86 and 3DMark06 Test Results: Per suggestions in the comments, I ran the MemTest86 overnight and it cycled through the 8 GB of memory 5 times without error. I also ran the 3DMark06 test without a problem (see my scores at http://3dmark.com/3dm06/15163549). So... what now? :-) Any further suggestions on what to check? 
Is there some way to get a stack trace or something when the computer locks up like that? Thanks

    Read the article

  • Bizarre and very specific Internet connection loss

    - by Synetech
    Yesterday (Friday, September 21, 2012), my Internet connection started acting up. After some testing, I confirmed a very specific and baffling set of symptoms: Internet connection goes away every 25-35 minutes (I did not confirm the exact interval, but it seems to be about 30 mins.) Only some protocols are affected; HTTP*, P2P, etc. stop working; FTP, etc. continue to work When it’s stopped, cannot even ping router or cable-modem IPs or view their firmware pages Domain-names and IPs are irrelevant (for protocols that stop working, neither work, for those that still work, both work) Resetting router fixes it for another 30 minutes Keeping the connection idle or active doesn’t seem to make a difference (nor the bandwidth usage in that period) Connecting directly to cable-modem allows it to work indefinitely Disconnecting the router from the cable-modem works indefinitely (no Internet connection obviously, but can still access router IP and firmware page) Connecting the router to the cable-modem, but putting the modem on standby also works indefinitely Same problem with both a wireless laptop and wired (on any port) desktop (both Windows 7; will try to test Windows XP when possible) Nothing had changed in the days leading up to the issue. No modifications to the networking configuration or the router; there were not even any Windows updates except for an MSSE definition update. Waiting does not fix it, nor does any amount of fiddling with anything; only resetting the router fixes it for 30 minutes (resetting the cable-modem doesn't work either) I tried cleaning the pins in the router’s plugs, but that didn’t help, which was not really a surprise since I was not getting a lost connection error. Obviously my first thought was that the router was having a problem, and this is borne out by some tests. The problem is that when it drops, it is not a full drop since I can still do things like ftp ftp.mcafee.com and such which means that the connection and DNS are still working. Moreover, if it were the router, then why does it stay alive indefinitely when not connected to the cable-modem (i.e., no outside influence)? The problem doesn't seem to be either the cable-modem nor the router, but rather an interaction between the two, like something from the outside (port scan? hacker? ISP?) that is triggering a problem in the router. I see that there have been a couple of vulnerabilities for the DI-524, but those were a while back and should be fixed since I have the last firmware for it. I don’t think it’s my ISP (Rogers) since I have been using the router for several years without problem and can connect indefinitely when bypassing it. But I can’t rule them out since that is one of the only possible things that could have suddenly changed. Does anybody have any ideas of explanations, fixed, or tests? (I note that when I opened the router, I heard a very high-pitched noise from somewhere near the capacitors/ferrite ring which I don’t think I heard the last time I opened it a few years ago, but then if it were that, then why would it affect only a very small, specific set of functions?)

    Read the article

  • Server Cabinet/Room Cooling

    - by user37226
Hello all. I currently have two desktops and three servers in my office sitting on the floor (I know this is bad). With that many servers the ambient temperature in the room goes up quickly. I am located in Dallas, TX, so during the winter, if the heat is kept low, it is not a problem, but during the summer it easily jumps the room +10 degrees. I have found a free 42U server cabinet that a hosting company was throwing away and decided to house all of these systems in it. One server is in a rack mount case while the other four servers are housed in mid-tower cases. I have purchased shelves for each computer and plan to lay the towers sideways on these shelves (as replacing the cases costs a heck of a lot of money). I like the idea of housing all of these systems in the cabinet because it will save a lot of room and clean up all of the cabling currently lying all over the office floor. When putting this setup together over the next couple of weeks, I want to address issues with dust and cooling. The server cabinet has a fan on top, a front plexiglass door, and a rear metal door with vent holes on the bottom. First, the cooling issues. I know I am going to want cool air to enter the bottom of the cabinet and exit the top. I do not want the room heating up, though, as this will make my work area hot and then make the servers warmer as the air eventually reenters the cabinet. I had an idea to fix this problem, but am unsure if it will work. I was thinking of taking flexible piping and adapting it to the back fans of the computers, with the other end of the pipe at the top close to the cabinet's top-mounted fan. I was then thinking of creating a duct around the top fan into the attic. Now I am very concerned that the attic will cause issues with this type of setup because during the July/August time frame, the attic is easily 120 degrees F. I could also use the flexible pipe to take it to an attic exhaust vent if it would be better to vent it into the 100 degree air outside (at least there may be wind). The other option would be to buy a small portable air conditioner. This may be a possibility, but do I want to spend the extra money on power? I bet this increases the noise. Plus they are around $250 on Amazon. What would you all recommend? Depending on the solution I end up running with above, I would also like to limit the dust that gets into the cabinet. If I were to cut a hole and mount a second cabinet fan on the bottom of the rear door, could I possibly mount a standard home air filter on the other side of that hole? Thanks in advance for your recommendations. I look forward to reading your interesting ideas.

    Read the article

  • Determining the required depth and specifications for a server cabinet

    - by Bingu Bingme
    I'm trying to understand the considerations ("why") that go into determining the specifications ("what") for a rackmount server cabinet, in order to determine what sort of rack I should purchase for my home use. Since this is for home use, I won't be following certain best practices (eg. hot/cold aisle, not even air conditioning) and may be willing to sacrifice in various areas in order to reduce cost and footprint - but please advise if there are safety concerns or other considerations to note. The most basic specs for a server cabinet are the dimensions (external width x external depth x usable height). Width: commonly 600mm or 800mm (if the use case requires extra clearance around the sides, such as if there is lots of cabling). In my case and most common cases, I'm going to stick with 600mm. Height: Select a sufficiently tall rack to fit my equipment. But how much may I stuff into it? Eg, if there is a 15U rack, can I really populate it with 15U of servers, or should I leave 1U at top and bottom for air circulation? Depth: Racks commonly have external depth of 600mm (network equipment), 800mm, 1000mm, or even longer. I'm trying to see how to fit into the 800mm depth. With reference to http://www.server-racks.com/rack-mount-depth.html, I'm hoping to have the front and rear posts mounted ~ 28.5" (72cm) apart, which would leave only 8cm for front space and rear space. How much rear space (from rear posts to back of rack) do I really need? I won't use cable management arms, so can I mount a 72cm depth server since the power, KVM, network cables won't take up much depth? My most important equipment are all < 60cm depth (4U chassis) and should comfortably fit within the 800mm cabinet. The rest of the equipment are very old 1U servers that range from 65-72cm depth. I might still want to make further use of them, or I might discard them since they are so old. Even if the 72cm servers cannot be powered on in an 800mm rack, I should be able to use them as 1U shelves. But, what server depth can I expect to be able to operate? Or am I forced to upgrade to 1000mm depth racks in order to use any servers deeper than 60cm? With reference to best practices for HP racks, some other specs and installation considerations: There aren't any minimum recommendations for clearance on the sides of the rack. It is recommended to leave 48" front clearance. The 48" front clearance is based on 32" chassis depth, 13" to extend the rack rails and mate the inner/outer rails, and 3" for movement. If I don't use such rails (eg, use shelves instead), it should be sufficient to leave front clearance of chassis depth + 3". It is recommended to leave 30" rear clearance "to provide space for servicing the rack". I'm planning to back the rack into a corner of the room, and wheel it slightly out when I need to access the rear. If the wheeling plan is ok, I still need to know how much rear clearance is required for air circulation and ventilation purposes. Castor wheels and stabilising feet. Since I'm backing the rack into a corner of the room, I'll only be able to set the stabilising feet on the front corners. Thoughts on safety? The rack that I'm considering has front glass doors with side ventilation slits and fully perforated rear doors. I'm hoping this will be a good balance between temperature and noise (only ventilation slits facing out the front, while the rear is facing the walls). Or is the sound of high-rpm fans going to escape through the front slits anyway and destroy my sanity?

    Read the article

  • How to diagnose computer lockups and freezes?

    - by Scott Mitchell
    I built a desktop computer a couple years back with the following specs: CPU: Intel Core 2 Quad Q9300 Yorkfield 2.5GHz 6 MB L2 Cache LGA 775 95W Quad-Core Processor BX80580Q9300 Motherboard: EVGA 122-CK-NF68-T1 LGA 775 NVIDIA nForce 680i SLI ATX Intel Motherboard Video Card: Two EVGA 256-P2-N758-TR GeForce 8600GT SCC 256 MB 128-bit GDDR3 PCI Express x16 SLI Supported Video Card PSU: SeaSonic S12 Energy Plus SS-550HT 550W ATX12V V2.3 / EPS12V V2.91 SLI Certified CrossFire Ready 80 PLUS Certified Active PFC Power Supply Memory: Two G.SKILL 4 GB (2 x 2 GB) 240-Pin DDR2 SDRAM DDR2 800 (PC2 6400) Dual Channel Kit Desktop Memory Model F2-6400CL5D-4GBPQ Since its inception, the machine has periodically locked up, the regularity having varied over the years from once a day to once a month. Typically, lockups happen once every few days. By "lockup" I mean my computer just freezes. The screen locks up, I can't move the mouse. Hitting keys on my keyboard that normally turn LEDs on or off on the keyboard (such as Caps Lock) no longer turn the LEDs on or off. If there was music playing at the time of the lockup, noise keeps coming out of the speakers, but it's just the current frequency/note that plays indefinitely. There is no BSOD. When such a lockup occurs I have to do a hard reboot by either turning off the computer or hitting the reset button. I have the most recent version of the NVIDIA hardware drivers, and update them semi-regularly, but that hasn't seemed to help. I am currently using Windows 7 x64, but was previously using Windows Server 2003 x64 and having the same lockup issues. My guess is that it's somehow video driver or motherboard related, but I don't know how to go about diagnosing this problem to narrow down which of the two is the culprit. Additional information re: cooling Regarding cooling... I've not installed any after-market cooling systems aside from two regular fans I scavenged from an older computer. The fan atop the CPU is the one that shipped with it. One of the two scavenged fans I added it located at the bottom tower of the corner, in an attempt to create some airflow from front to back. The second fan is pointed directly at the two video cards. SpeedFan installation and readings Per studiohack's suggestion, I installed SpeedFan, which provided the following temperature readings: GPU: 63C GPU: 65C System: 76C CPU: 64C AUX: 36C Core 0: 78C Core 1: 76C Core 2: 79C Core 3: 79C Update #3: Another Lockup :-( Well, I had another lockup last night. :-( SpeedFan reported the CPU temp at 38 C when it happened, and there was no spike in temperature leading up to the freeze. One thing I notice is that the freeze seems more likely to happen if I am watching a video. In fact, of the last 5 freezes over the past month, 4 of them have been while watching a video on Flickr. Not necessarily the same video, but a video nevertheless. I don't know if this is just coincidence or if it means anything. (As an aside, each night before bedtime my 2 year old daughter sits on my lap and watches some home videos on Flickr and, in the last month, has learned the phrase, "Uh oh, computer broke.") Update #4: MemTest86 and 3DMark06 Test Results: Per suggestions in the comments, I ran the MemTest86 overnight and it cycled through the 8 GB of memory 5 times without error. I also ran the 3DMark06 test without a problem (see my scores at http://3dmark.com/3dm06/15163549). So... what now? :-) Any further suggestions on what to check? 
Is there some way to get a stack trace or something when the computer locks up like that? Resolution: I never did figure out the particular problem, but based on the suggestions here and elsewhere, I'm presuming it was a motherboard issue. In any event, I recently upgraded my system, buying a new motherboard, PSU, CPU, and RAM, and that new rig has been working splendidly the past several weeks. I am using the same graphics cards as in the old setup, so I think it's safe to reason that they weren't the cause of the problem.

    Read the article

  • .AVI Files randomly cease to open, other strange errors too

    - by Ben Franchuk
    I Recently (a couple weeks ago) downloaded the complete series of Seinfeld, all in varying file type. I Watched them in sequence according to season and to airing date, and all was well. All of the files played fine with my media player of choice ("BS Player"), and once I had finished, I went onto watch some other TV I had previously downloaded (The U.S. Series of "The Office"), and after then, some other film and then some music, over the following weeks (keep in mind all of these files are all on the same Hard Drive). Later then, More recently, I Went back to watching Seinfeld. The episodes played well as they did before- with the exclusion of a few in Season 7. I Have not tested all of the episodes in the season, but upon inspection, the majority of them are experiencing this problem; the problem being simply that they don't open! BS Player says that the files are either damaged or that the codecs to play the files are not on my computer-- however I am certain that the files DO have the codecs, and I am pretty sure that they are NOT DAMAGED either. I Have played the files with other players (such as VLC, Media Player Classic, and Windows Media Player), too, only to the same result; of them not opening. Seemingly the only way that I can differentiate between a damaged file and a non-damaged file are the way that the icon shows in Windows Explorer. For example, the below image is how explorer shows the information of a file that is non-damaged... ...and below is how a damaged file appears... The most disturbing and confusing part of this, though, is the last episode in the season- It opens, but not as a video- Instead, as a 1 Hour, 16 Minute, and 35 Second Audio file! The file plays a song for the first 4 or so minutes, and then is pretty much silent (except for some extremely quiet noise) until the last minute or so, when a random array of chopped up sounds and beeping noises play. I Do not recognise the song at the beginning of the file, but by the sounds of it, it is a song by the artist "Mr. Oizo," who's complete works I downloaded a couple weeks before now; and a bit before then I had finished downloading season 9 (not affected by these problems) of Seinfeld. I'd also like to note that the file I told of earlier (which played audio instead of video) reads as the same size as the other files in the season (around 175 MB) and also opens as a video clip. I Have NEVER experienced any of these problems in the past, and they seem to be only effecting the one season of my downloaded TV. The problems have not arisen with any of the other files on my Hard Drive, or any of the files downloaded around the time or after the time of which I downloaded season 7 of Seinfeld- or at least to my noticing. I Use the hard drive these files are located on almost every day, so could that be the cause of these problems? Is this a sign that my HDD is soon going to die? If it helps, the HDD is a Western Digital MyBook 1.5 TB 7500 RPM. It is connected to the computer via U.S.B. 2.0. EDIT! I noticed that this problem is now occurring with Season 9 of Seinfeld- and, presumably, other files on the drive I have yet to check. Please, If you have ANY IDEA AT ALL on what may be causing this or how to fix it, do tell me!

    Read the article

  • Do hard drive enclosures fail/is it the HDD or enclosure?

    - by x0a
    I'm having a whole host of problems with an external hard drive that was working just fine a couple of hours ago. I've had this problem before once, and that was about 3 months ago, here's what I documented: So a couple of hours ago I turned off all my computers and shut off the power to all my devices in my room, then went and turned the power off at the main switch so I could change an outlet. A couple hours later, after I've already slowly turned everything back on, I go to my xbox to try and watch a movie and it can't seem to list any of the movies I've got. So I go to my desktop to find that my external hard drive isn't there.. even though it's on and connected. It's also stationary and hidden behind something so there's not a whole lot of tampering/physical wear to that external. I plug it into my laptop to try and see what's going on. It starts making this endless loud screeching noise. None of that clicking that's usually associated with hd damage. It's not listed in my computers, and it shows up in Disk Management as "uninitialized" asking me to choose between two different partition types. After carefully disconnecting it and connecting it back, it asks me to format it, which I cancel. I start googling about my issue, starting to accept the situation, torn as hell and helpless and just about ready to toss the thing. Suddenly the screeching stops, after almost 45 minutes of it going, and Disk Management lists the drive as "Online" and "Healthy". Explorer pops up with all my files! I'm still being really careful with it and weary and treating it as though it's in fragile shape. I've downloaded some S.M.A.R.T. software to read the values and everything is listed as "OK" . No reallocated sectors, no read errors, no seek errors. I also ran a quick self-test, which completed without error. Everything seems fine. It looks to be a perfectly healthy external hard drive. So what the hell was that about? Was it doing some sort of maintenance or self-test? How am I supposed to tell the difference? I would've undoubtedly killed the drive for sure if had it gone on a bit longer. I've got the same problem now, with one exception: it doesn't magically reappear after the screeching stops. Occasionally I manage to get some S.M.A.R.T. diagnostics information, which basically reads everything as fine. The only problem is that my HD isn't initializing (so I can't access anything in it). I'm able to successfully run a quick smart test but not an extended one (I've only tried it once but got conflicting indications as to whether it was actually making any progress or not (was stuck on Random read test). So, final question (if all else fails): Could the hard drive enclosure be failing rather than the HDD? Is this a likely possibility at all? How would I know?

    Read the article

  • Using a "white list" for extracting terms for Text Mining

    - by [email protected]
    In Part 1 of my post on "Generating cluster names from a document clustering model" (part 1, part 2, part 3), I showed how to build a clustering model from text documents using Oracle Data Miner, which automates preparing data for text mining. In this process we specified a custom stoplist and lexer and relied on Oracle Text to identify important terms.  However, there is an alternative approach, the white list, which uses a thesaurus object with the Oracle Text CTXRULE index to allow you to specify the important terms. INTRODUCTIONA stoplist is used to exclude, i.e., black list, specific words in your documents from being indexed. For example, words like a, if, and, or, and but normally add no value when text mining. Other words can also be excluded if they do not help to differentiate documents, e.g., the word Oracle is ubiquitous in the Oracle product literature. One problem with stoplists is determining which words to specify. This usually requires inspecting the terms that are extracted, manually identifying which ones you don't want, and then re-indexing the documents to determine if you missed any. Since a corpus of documents could contain thousands of words, this could be a tedious exercise. Moreover, since every word is considered as an individual token, a term excluded in one context may be needed to help identify a term in another context. For example, in our Oracle product literature example, the words "Oracle Data Mining" taken individually are not particular helpful. The term "Oracle" may be found in nearly all documents, as with the term "Data." The term "Mining" is more unique, but could also refer to the Mining industry. If we exclude "Oracle" and "Data" by specifying them in the stoplist, we lose valuable information. But it we include them, they may introduce too much noise. Still, when you have a broad vocabulary or don't have a list of specific terms of interest, you rely on the text engine to identify important terms, often by computing the term frequency - inverse document frequency metric. (This is effectively a weight associated with each term indicating its relative importance in a document within a collection of documents. We'll revisit this later.) The results using this technique is often quite valuable. As noted above, an alternative to the subtractive nature of the stoplist is to specify a white list, or a list of terms--perhaps multi-word--that we want to extract and use for data mining. The obvious downside to this approach is the need to specify the set of terms of interest. However, this may not be as daunting a task as it seems. For example, in a given domain (Oracle product literature), there is often a recognized glossary, or a list of keywords and phrases (Oracle product names, industry names, product categories, etc.). Being able to identify multi-word terms, e.g., "Oracle Data Mining" or "Customer Relationship Management" as a single token can greatly increase the quality of the data mining results. The remainder of this post and subsequent posts will focus on how to produce a dataset that contains white list terms, suitable for mining. CREATING A WHITE LIST We'll leverage the thesaurus capability of Oracle Text. Using a thesaurus, we create a set of rules that are in effect our mapping from single and multi-word terms to the tokens used to represent those terms. For example, "Oracle Data Mining" becomes "ORACLEDATAMINING." First, we'll create and populate a mapping table called my_term_token_map. 
All text has been converted to upper case and values in the TERM column are intended to be mapped to the token in the TOKEN column. TERM                                TOKEN DATA MINING                         DATAMINING ORACLE DATA MINING                  ORACLEDATAMINING 11G                                 ORACLE11G JAVA                                JAVA CRM                                 CRM CUSTOMER RELATIONSHIP MANAGEMENT    CRM ... Next, we'll create a thesaurus object my_thesaurus and a rules table my_thesaurus_rules: CTX_THES.CREATE_THESAURUS('my_thesaurus', FALSE); CREATE TABLE my_thesaurus_rules (main_term     VARCHAR2(100),                                  query_string  VARCHAR2(400)); We next populate the thesaurus object and rules table using the term token map. A cursor is defined over my_term_token_map. As we iterate over  the rows, we insert a synonym relationship 'SYN' into the thesaurus. We also insert into the table my_thesaurus_rules the main term, and the corresponding query string, which specifies synonyms for the token in the thesaurus. DECLARE   cursor c2 is     select token, term     from my_term_token_map; BEGIN   for r_c2 in c2 loop     CTX_THES.CREATE_RELATION('my_thesaurus',r_c2.token,'SYN',r_c2.term);     EXECUTE IMMEDIATE 'insert into my_thesaurus_rules values                        (:1,''SYN(' || r_c2.token || ', my_thesaurus)'')'     using r_c2.token;   end loop; END; We are effectively inserting the token to return and the corresponding query that will look up synonyms in our thesaurus into the my_thesaurus_rules table, for example:     'ORACLEDATAMINING'        SYN ('ORACLEDATAMINING', my_thesaurus)At this point, we create a CTXRULE index on the my_thesaurus_rules table: create index my_thesaurus_rules_idx on        my_thesaurus_rules(query_string)        indextype is ctxsys.ctxrule; In my next post, this index will be used to extract the tokens that match each of the rules specified. We'll then compute the tf-idf weights for each of the terms and create a nested table suitable for mining.
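    For context on how that index gets used (the follow-up step referred to above), a CTXRULE index is queried with the MATCHES operator: you hand it a document's text and it returns the rows whose stored query_string is satisfied, i.e., the white-list tokens present in that document. A minimal sketch, with :document_text standing in for the document being classified:

    ```sql
    -- Which white-list tokens appear in this document?
    SELECT main_term
    FROM   my_thesaurus_rules
    WHERE  MATCHES(query_string, :document_text) > 0;
    ```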

    Read the article

  • Is Berkeley DB a NoSQL solution?

    - by Gregory Burd
    Berkeley DB is a library. To use it to store data you must link the library into your application. You can use most programming languages to access the API, the calls across these APIs generally mimic the Berkeley DB C-API which makes perfect sense because Berkeley DB is written in C. The inspiration for Berkeley DB was the DBM library, a part of the earliest versions of UNIX written by AT&T's Ken Thompson in 1979. DBM was a simple key/value hashtable-based storage library. In the early 1990s as BSD UNIX was transitioning from version 4.3 to 4.4 and retrofitting commercial code owned by AT&T with unencumbered code, it was the future founders of Sleepycat Software who wrote libdb (aka Berkeley DB) as the replacement for DBM. The problem it addressed was fast, reliable local key/value storage. At that time databases almost always lived on a single node, even the most sophisticated databases only had simple fail-over two node solutions. If you had a lot of data to store you would choose between the few commercial RDBMS solutions or to write your own custom solution. Berkeley DB took the headache out of the custom approach. These basic market forces inspired other DBM implementations. There was the "New DBM" (ndbm) and the "GNU DBM" (GDBM) and a few others, but the theme was the same. Even today TokyoCabinet calls itself "a modern implementation of DBM" mimicking, and improving on, something first created over thirty years ago. In the mid-1990s, DBM was the name for what you needed if you were looking for fast, reliable local storage. Fast forward to today. What's changed? Systems are connected over fast, very reliable networks. Disks are cheep, fast, and capable of storing huge amounts of data. CPUs continued to follow Moore's Law, processing power that filled a room in 1990 now fits in your pocket. PCs, servers, and other computers proliferated both in business and the personal markets. In addition to the new hardware entire markets, social systems, and new modes of interpersonal communication moved onto the web and started evolving rapidly. These changes cause a massive explosion of data and a need to analyze and understand that data. Taken together this resulted in an entirely different landscape for database storage, new solutions were needed. A number of novel solutions stepped up and eventually a category called NoSQL emerged. The new market forces inspired the CAP theorem and the heated debate of BASE vs. ACID. But in essence this was simply the market looking at what to trade off to meet these new demands. These new database systems shared many qualities in common. There were designed to address massive amounts of data, millions of requests per second, and scale out across multiple systems. The first large-scale and successful solution was Dynamo, Amazon's distributed key/value database. Dynamo essentially took the next logical step and added a twist. Dynamo was to be the database of record, it would be distributed, data would be partitioned across many nodes, and it would tolerate failure by avoiding single points of failure. Amazon did this because they recognized that the majority of the dynamic content they provided to customers visiting their web store front didn't require the services of an RDBMS. The queries were simple, key/value look-ups or simple range queries with only a few queries that required more complex joins. 
They set about to use relational technology only in places where it was the best solution for the task, places like accounting and order fulfillment, but not in the myriad of other situations. The success of Dynamo, and it's design, inspired the next generation of Non-SQL, distributed database solutions including Cassandra, Riak and Voldemort. The problem their designers set out to solve was, "reliability at massive scale" so the first focal point was distributed database algorithms. Underneath Dynamo there is a local transactional database; either Berkeley DB, Berkeley DB Java Edition, MySQL or an in-memory key/value data structure. Dynamo was an evolution of local key/value storage onto networks. Cassandra, Riak, and Voldemort all faced similar design decisions and one, Voldemort, choose Berkeley DB Java Edition for it's node-local storage. Riak at first was entirely in-memory, but has recently added write-once, append-only log-based on-disk storage similar type of storage as Berkeley DB except that it is based on a hash table which must reside entirely in-memory rather than a btree which can live in-memory or on disk. Berkeley DB evolved too, we added high availability (HA) and a replication manager that makes it easy to setup replica groups. Berkeley DB's replication doesn't partitioned the data, every node keeps an entire copy of the database. For consistency, there is a single node where writes are committed first - a master - then those changes are delivered to the replica nodes as log records. Applications can choose to wait until all nodes are consistent, or fire and forget allowing Berkeley DB to eventually become consistent. Berkeley DB's HA scales-out quite well for read-intensive applications and also effectively eliminates the central point of failure by allowing replica nodes to be elected (using a PAXOS algorithm) to mastership if the master should fail. This implementation covers a wide variety of use cases. MemcacheDB is a server that implements the Memcache network protocol but uses Berkeley DB for storage and HA to replicate the cache state across all the nodes in the cache group. Google Accounts, the user authentication layer for all Google properties, was until recently running Berkeley DB HA. That scaled to a globally distributed system. That said, most NoSQL solutions try to partition (shard) data across nodes in the replication group and some allow writes as well as reads at any node, Berkeley DB HA does not. So, is Berkeley DB a "NoSQL" solution? Not really, but it certainly is a component of many of the existing NoSQL solutions out there. Forgetting all the noise about how NoSQL solutions are complex distributed databases when you boil them down to a single node you still have to store the data to some form of stable local storage. DBMs solved that problem a long time ago. NoSQL has more to do with the layers on top of the DBM; the distributed, sometimes-consistent, partitioned, scale-out storage that manage key/value or document sets and generally have some form of simple HTTP/REST-style network API. Does Berkeley DB do that? Not really. Is Berkeley DB a "NoSQL" solution today? Nope, but it's the most robust solution on which to build such a system. Re-inventing the node-local data storage isn't easy. A lot of people are starting to come to appreciate the sophisticated features found in Berkeley DB, even mimic them in some cases. Could Berkeley DB grow into a NoSQL solution? Absolutely. 
    Our key/value API could be extended over the net using any of a number of existing network protocols such as memcache or HTTP/REST. We could adapt our node-local data partitioning out over replicated nodes. We even have a nice query language and cost-based query optimizer in our BDB XML product that we could reuse were we to build out a document-based NoSQL-style product. XML and JSON are not so different that we couldn't adapt one to work with the other interchangeably. Without too much effort we could add what's missing; we could jump into this NoSQL market within a single product development cycle. Why isn't Berkeley DB already a NoSQL solution? Why aren't we working on it? Why indeed...
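    To make the DBM-style key/value model concrete, here is a minimal sketch using Python's standard-library dbm module (which wraps an ndbm/gdbm-style library rather than Berkeley DB itself, but the programming model is the same spirit): open a file, store a few keys, and read them back.
        import dbm

        # Open (creating if necessary) a simple on-disk key/value store.
        with dbm.open("example_store", "c") as db:   # "c" = create if missing
            db[b"user:42"] = b"alice"                # keys and values are bytes
            db[b"user:43"] = b"bob"
            print(db[b"user:42"])                    # b'alice'
            for key in db.keys():                    # iterate over everything stored
                print(key, db[key])
    Everything a NoSQL store does on a single node ultimately reduces to operations of this shape; the interesting parts live in the replication, partitioning, and network layers built on top of it.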

    Read the article

  • A Look Back at 2010 Predictions

    - by David Dorf
    Now is the time of year people make their predictions for next year, but before I start thinking about 2011 it's worth a look back to see how my predictions for 2010 fared. 1. Borders and Blockbuster bite the dust. I would have never predicted a strong brand such as Circuit City could die, but now I know it can happen to anyone. Borders has lost the battle with Barnes & Noble and Blockbuster has lost to Netflix. And just to be sure, Amazon put an extra nail in each coffin. Borders received additional investment from Bennett LeBow to keep it afloat, but the stock is down around $1.25 with no profits in sight. Blockbuster filed for bankruptcy back in September. 2. Every retailer finally has a page on Facebook... but very few figure out how to keep fans engaged. Retailer postings become noise, and fans start to unsubscribe. Twitter goes in the same direction. A few standout retailers will figure out how to use social media, and the rest will remain dumbfounded. Most retailers are on the Facebook bandwagon, and their fan bases seem to be increasing thanks to promotions like The Gap's logo redesign, Lowe's Black Friday sneak peek, and Walmart's Crowd Savers. There are several examples of f-commerce advancements, including some interesting integrations from Amazon. 3. Smartphones consolidate and grow. More and more people will step up to smartphones, most of whom will choose iPhone, Blackberry, and Android phones. Other smartphones will vanish, and networks will start to strain. But retailers will finally embrace mobile as the next big channel. Retail marketing departments will build mobile apps without the help of their IT department, and eventually they will get into a bind. Android has been on a tear lately stealing market share from Blackberry. Palm and Microsoft are trending down, and Apple is holding steady. Smartphone sales are up 15% and expected to continue. Retailers understand the importance of mobile, and some innovative applications have been produced this year. 4. Google helps the little guys. Google will push its Favorite Places project to help give exposure to small retailers and restaurants. They will enable small retailers to act like big ones by providing storefronts, detailed product information, and coupons for consumers. Google will find a way to bring augmented reality to the masses. I can't say I've seen much new from Google regarding Favorite Places, but they've continued to push local product search. From the PC or smartphone, consumers can search for products and see which nearby stores have it in stock. Oracle Retail even productized an integration to Google to support this effort. I suppose if Google ever buys Groupon then it will bring them even closer to local shopping. Google talked about augmented humanity, but that has nothing to do with augmented reality. 5. Steve Jobs Is Bugs Bunny and Steve Ballmer is Elmer Fudd. (OK, I stole that headline from an InformationWeek article. I couldn't resist.) Both Apple and Microsoft will continue to open new stores, but only Apple will show real growth. POSReady 2009 (formerly WEPOS) will continue to share the POS market with Linux. The iPhone and iPod will continue to capture market share, but there won't be an Apple tablet. There won't be an Apple tablet? What was I thinking? While Apple has well over 300 stores, there are fewer than 10 Microsoft stores. 
    Initial impressions show that even though Microsoft is locating its stores near Apple Stores, they are not converting customers, with shoppers citing a lack of assortment and high prices. 6. Consolidation of e-commerce software providers. Software vendors in the areas of search, reviews, online call-centers, payments, and e-commerce will consolidate, partly driven by the success of m-commerce and SaaS. Amazon will find someone else to buy, and eBay will continue to lose momentum. Consolidation of e-commerce providers continued with IBM acquiring Sterling Commerce and CoreMetrics, and Oracle recently announcing the acquisition of ATG. Amazon grabbed Zappos, Woot, and Diapers.com to continue its dominance of online selling. While eBay's Marketplace growth may have slowed, its PayPal division is doing quite well, fueled in part by demand for mobile payments. 7. Book publishers mirror music labels. Just as the iPod brought digital downloads to the masses, the Kindle and Nook will power the e-book revolution. Books will continue to use DRM for a few more years before following the path of music. Publishers will try to preserve the margins of hardbacks by associating e-book releases with paperbacks. Amazon has done a good job providing e-reader clients for smartphones, PCs, and tablets. Competition from Barnes & Noble has forced Amazon to support book loaning, and both companies are making it easier for people to publish ebooks (with or without DRM). Progress is slow but steady. 8. NFC makes inroads, RFID treads water. Near Field Communication starts to appear in mobile phones, and retailers beta-test its use for payments and loyalty programs. RFID tag costs come down a bit, but not enough to spur accelerated adoption. Nokia announced plans to offer NFC-enabled phones in 2011, and rumors are swirling about NFC in the upcoming iPhone. I think NFC is heading in the right direction, and I've heard more interest from retailers about specialized uses for RFID. 9. Digital Signage goes the way of augmented reality. People use their camera phones to leave geo-tagged notes all over cities, rating stores and restaurants, and "painting" graffiti. But people get tired of holding their phones in front of their faces, so AR glasses are offered in much the same way Bluetooth headsets emerged. Retailers experiment with in-store advertising using AR. Several retailers like Pizza Hut, Benetton, and Target have experimented with AR but it's still somewhat of a gimmick used by marketing. I think this prediction is a year or two too early. 10. JDA flip-flops again. After announcing their embracing of the .Net architecture, then switching to J2EE after the Manugistics acquisition, JDA will finally decide to standardize on Apple's Objective C. Everything will be ported to the iPhone and be available on the AppStore. After all, there's not much left to try. This was, of course, a joke but the sentiment is still valid. JDA seems more supply-chain focused than retail focused, which is an outgrowth of their i2 acquisition. Of the 10 predictions, I'm going to say I got 6 somewhat correct. (Don't you just love grading your own paper?) Soon I'll post my predictions for 2011 so be on the lookout. Until then here's one more prediction: Va Tech beats Stanford in the Orange Bowl -- count on it!

    Read the article

  • Robotic Arm – Hardware

    - by Szymon Kobalczyk
    This is the first in a series of articles about a project I've been building in my spare time since last summer. Actually it all began when I was researching the topic of modeling human motion kinematics in order to create a gesture recognition library for Kinect. This ties heavily into the motion theory of robotic manipulators, so I also glanced at some designs of robotic arms. Somehow I stumbled upon this cool-looking open source robotic arm: It was featured on Thingiverse and published by user jjshortcut (Jan-Jaap). Since I had been hooked for some time on toying with microcontrollers, robots and other electronics, I decided to give it a try and build it myself. In this post I will describe the hardware build of the arm and in later posts I will be writing about the software to control it. Another reason to build the arm myself was the cost factor. Even small commercial robotic arms are quite expensive – products from Lynxmotion and Dagu look great but both cost around USD $300 (actually there is one cheap arm available but it looks more like a toy to me). In comparison this design is quite cheap. It uses seven hobby grade servos and even the cheapest ones should work fine. The structure is built from a set of laser cut parts connected with a few metal spacers (15mm and 47mm) and lots of M3 screws. Other than that you'd only need a microcontroller board to drive the servos. So in total it comes out a lot cheaper to build it yourself than to buy an off-the-shelf robotic arm. Oh, and if you don't like this one there are a few more robotic arm projects at Thingiverse (including one by oomlout). Laser cut parts Some time ago I built another robot using laser cut parts so I knew the process already. You can grab the design files in both DXF and EPS format from Thingiverse, and there are also 3D models of each part in STL. Actually the design is split into a second project for the mini servo gripper (there is also a standard servo version available but it won't fit this arm). I wanted to make some small adjustments, change the layout, and add measurements to the parts before sending it for cutting. I've looked at some free 2D CAD programs, and finally did all this work using QCad 3 Beta, which worked great for me (I also tried LibreCAD but it didn't work that well). All parts are cut from 4 mm thick material. Because I was worried that acrylic is too fragile and might break, I also ordered another set cut from plywood. In the end I built it from plywood because it was easier to glue (I was told acrylic requires a special glue). Btw. I found a great laser cutter service in Kraków and highly recommend it (www.ebbox.com.pl). It cost me only USD $26 for both sets ($16 acrylic + $10 plywood). Metal parts I bought all the M3 screws and nuts at a local hardware store. Make sure to look for nylon lock (nyloc) nuts for the gripper because otherwise it unscrews and comes apart quickly. I couldn't find a local store with metal spacers and had to order them online (you'd need 11 x 47mm and 3 x 15mm). I think I paid less than USD $10 for all metal parts. Servos This arm uses five standard size servos to drive the arm itself, and two micro servos are used on the gripper. The author of the project used a Modelcraft RS-2 Servo and a Modelcraft ES-05 HT Servo. I had two Futaba S3001 servos lying around, and ordered additional TowerPro SG-5010 standard size servos and TowerPro SG90 micro servos. However it turned out that the SG90 wouldn't fit in the gripper so I had to replace it with a slightly smaller E-Sky EK2-0508 micro servo. 
    Later it also turned out that the Futaba servos make some strange noise while working, so I swapped one for a TowerPro SG-5010, which has higher torque (8 kg/cm). I've also bought three servo extension cables. All servos cost me USD $45. Assembly The build process is not difficult but you need to think carefully about the order of assembly. You can do the base and upper arm first. Because the two servos in the base are close together, you need to put the first one in with one piece of the lower arm already connected before you put in the second servo. Then you connect the upper arm and finally put on the second piece of the lower arm to hold it together. The gripper and base require some gluing so think it through too. Make sure to look closely at all the photos on Thingiverse (also other people's copies) and read the additional posts on jjshortcut's blog: "My mini servo grippers and completed robotic arm" and "Multiply the robotic arm and electronics". Here is also Rob's copy cut from aluminum. My assembled arm looks like this – I think it turned out really nice: Servo controller board The last piece of hardware I needed was an electronic board that would take commands from the PC and drive all seven servos. I could probably have used an Arduino for this task, and in fact there are several Arduino servo shields available (for example from Adafruit or Renbotics). However one problem is that most support only up to six servos, and the second is that their accuracy is limited by Arduino's timer frequency. So instead I looked for a dedicated servo controller and found the series of Maestro boards from Pololu. I picked the Pololu Mini Maestro 12-Channel USB Servo Controller. It has many nice features including a native USB connection, high resolution pulses (0.25µs) with no jitter, built-in speed and acceleration control, and even scripting capability. Another cool feature is that besides servo control, each channel can be configured as either general input or output. So far I'm using seven channels so I still have five available to connect some sensors (for example a distance sensor mounted on the gripper might be useful). And the last but important factor was that they have an SDK in .NET – what more could I wish for! The board itself is very small – half the size of a Tic-Tac box. I picked one for about USD $35 in this store. Perhaps another good alternative would be the Phidgets Advanced Servo 8-Motor – but it is significantly more expensive at USD $87.30. The Maestro Controller Driver and Software package includes the Maestro Control Center program, which lets you immediately configure the board. For each servo I first figured out its move range and set the min/max limits. I played with setting the speed and acceleration values as well. A big issue for me was that there are two servos that control the position of the lower arm (shoulder joint), and both have to be moved at the same time. This is where the scripting feature of the Pololu board turned out to be very helpful. I wrote a script that synchronizes the position of the second servo with the first one – so now I only need to move one servo and the other will follow automatically. This turned out to be tricky because I couldn't find a simple offset mapping of the move range for each servo – I had to divide it into several sub-ranges and map each individually. The scripting language is a bit assembler-like but gets the job done. And there is even runtime debugging and a stack view available. Altogether I'm very happy with the Pololu Mini Maestro Servo Controller, and with this final piece I completed the build and was able to move my arm from the Maestro Control Center program.   
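    For anyone who would rather drive the board from a script than from the .NET SDK, here is a small illustrative sketch (not part of the original build) that moves one channel from Python over the Maestro's virtual serial port using pyserial and what I understand to be the compact-protocol Set Target command (0x84); the port name and channel number are assumptions you would adjust for your own setup:
        import serial  # pyserial

        PORT = "/dev/ttyACM0"   # assumption: the Maestro's command port on Linux
        CHANNEL = 0             # servo channel to move
        TARGET_US = 1500.0      # pulse width in microseconds (roughly centre)

        def set_target(link, channel, microseconds):
            # The target is sent in quarter-microsecond units, split into two
            # 7-bit bytes after the command byte (0x84) and the channel number.
            quarter_us = int(round(microseconds * 4))
            low = quarter_us & 0x7F
            high = (quarter_us >> 7) & 0x7F
            link.write(bytes([0x84, channel, low, high]))

        with serial.Serial(PORT, 9600, timeout=1) as link:
            set_target(link, CHANNEL, TARGET_US)
    Target positions are given in quarter-microseconds, so 1500 µs (a typical servo centre) becomes 6000 before it is split into the two 7-bit bytes.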
    The total cost of my robotic arm was:
    $10 laser cut parts
    $10 metal parts
    $45 servos
    $35 servo controller
    -----------------------
    $100 total
    So here you have all the information about the hardware. In the next post I'll start talking about the software that I wrote in Microsoft Robotics Developer Studio 4. Stay tuned!

    Read the article

  • Beyond Chatting: What ‘Social’ Means for CRM

    - by Natalia Rachelson
    A guest post by Steve Diamond, Senior Director, Outbound Product Management, Oracle In a recent post on this blog, my colleague Steve Boese asked three questions related to the widespread popularity and incredibly rapid growth of Facebook, Pinterest, and LinkedIn. Steve then addressed the many applications for collaborative solutions in the area of Human Capital Management. So, in turning to a conversation about Customer Relationship Management (CRM) and Sales Force Automation (SFA), let me ask you one simple question. How many sales people, particularly at business-to-business companies, consistently meet or beat their quotas in their roles by working alone, with no collaboration among fellow sales people, sales executives, employees in product groups, in service, in Legal, third-party partners, etc.? Hello? Is anybody out there? What's that cricket noise I hear? That's correct. Nobody! When it comes to Sales, introverts arguably have a distinct disadvantage. While it's certainly a truism that "success" in most professional endeavors requires working with people, it's a mandatory success factor in Sales. This fact became abundantly clear to me one early morning in the late 1990s when I joined the former Hyperion Solutions (now part of Oracle) and attended a Sales Award Ceremony. The Head of Sales at that time gave out dozens of awards – none of them to individuals and all of them to TEAMS of individuals. That's how it works in Sales. Your colleagues help provide you with product intelligence and competitive intelligence. They help you build the best presentations, pitches, and proposals. They help you develop the most killer RFPs. They align you with the best product people to ensure you're matching the best products for the opportunity and join you in critical meetings. They help knock the socks off your prospects in "bake off" demos. They bring in the best partners to either add complementary products to your opportunity or help you implement a solution. They work with you as a collective team. And so how is all this collaboration STILL typically done today? Through email. And yet we all silently or not so silently grimace about email. It's relatively siloed. It's painful to search. It's difficult to align by topic. And it's nearly impossible to re-trace meaningful and helpful conversations that occurred among a group or a team at some point in history. This is where social networking for Sales comes into play. It's about PURPOSEFUL social networking versus chattering. What is purposeful social networking? It's collaboration that's built around opportunities, accounts, and contacts. It's collaboration that delivers valuable context – on the target company, and on key competitors – just to name two examples. 
    It's collaboration that can scale to provide coaching for larger numbers of sales representatives, both for general purposes, and as we've largely discussed here, for specific 'deals.' And it's collaboration that allows a team of people to collectively edit and iterate on a document like an RFP or a soon-to-be killer presentation that is maintained in a central repository, with no time wasted searching for it or worrying about version control. But lest we get carried away, let's remember that collaboration "happens" among sales people whether there is specialized software to support it or not. The human practice of sales has not changed much in the last 80 to 90 years. Collaboration has been a mainstay during this entire time. But what social networking in general, and Oracle Social Networking in particular delivers, is the opportunity for sales teams to dramatically increase their effectiveness and efficiency – to identify and close more high quality and lucrative opportunities more quickly. For most sales organizations, this is how the game is won. To learn more please visit Oracle Social Network and Oracle Fusion Customer Relationship Management on oracle.com

    Read the article

  • Why did Ubuntu suddenly get so slow?

    - by user101383
    12.10 has been slowing down mysteriously. Normally, in past versions, I can log in, open Firefox, and it will pop up within seconds. 12.10 is like that upon install too, though once I install my old apps, it gets very slow by Ubuntu standards. After login the hard drive will just make noise for a while before the OS will do anything. Hardware: enter description: Desktop Computer product: XPS 8300 () vendor: Dell Inc. serial: B6G2WR1 width: 64 bits capabilities: smbios-2.6 dmi-2.6 vsyscall32 configuration: boot=normal chassis=desktop uuid=44454C4C-3600-1047-8032-C2C04F575231 core description: Motherboard product: 0Y2MRG vendor: Dell Inc. physical id: 0 version: A00 serial: ..CN7360419G04VQ. slot: To Be Filled By O.E.M. *cpu description: CPU product: Intel(R) Core(TM) i7-2600 CPU @ 3.40GHz vendor: Intel Corp. physical id: 4 bus info: cpu@0 version: Intel(R) Core(TM) i7-2600 CPU @ 3.40GHz serial: To Be Filled By O.E.M. slot: CPU 1 size: 1600MHz capacity: 1600MHz width: 64 bits clock: 100MHz capabilities: x86-64 fpu fpu_exception wp vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx lahf_lm ida arat epb xsaveopt pln pts dtherm tpr_shadow vnmi flexpriority ept vpid cpufreq configuration: cores=4 enabledcores=1 threads=2 *-cache:0 description: L1 cache physical id: 5 slot: L1-Cache size: 256KiB capacity: 256KiB capabilities: internal write-through unified *-cache:1 description: L2 cache physical id: 6 slot: L2-Cache size: 1MiB capacity: 1MiB capabilities: internal write-through unified *-cache:2 DISABLED description: L3 cache physical id: 7 slot: L3-Cache size: 8MiB capacity: 8MiB capabilities: internal write-back unified *-memory description: System Memory physical id: 20 slot: System board or motherboard size: 8GiB *-bank:0 description: SODIMM DDR3 Synchronous 1333 MHz (0.8 ns) product: NT2GC64B88B0NF-CG vendor: Nanya physical id: 0 serial: 7228183 slot: DIMM3 size: 2GiB width: 64 bits clock: 1333MHz (0.8ns) *-bank:1 description: SODIMM DDR3 Synchronous 1333 MHz (0.8 ns) product: NT2GC64B88B0NF-CG vendor: Nanya physical id: 1 serial: 1E28183 slot: DIMM1 size: 2GiB width: 64 bits clock: 1333MHz (0.8ns) *-bank:2 description: SODIMM DDR3 Synchronous 1333 MHz (0.8 ns) product: NT2GC64B88B0NF-CG vendor: Nanya physical id: 2 serial: 9E28183 slot: DIMM4 size: 2GiB width: 64 bits clock: 1333MHz (0.8ns) *-bank:3 description: SODIMM DDR3 Synchronous 1333 MHz (0.8 ns) product: NT2GC64B88B0NF-CG vendor: Nanya physical id: 3 serial: 5527183 slot: DIMM2 size: 2GiB width: 64 bits clock: 1333MHz (0.8ns) *-firmware description: BIOS vendor: Dell Inc. 
physical id: 0 version: A05 date: 09/21/2011 size: 64KiB capacity: 4032KiB capabilities: mca pci upgrade shadowing escd cdboot bootselect socketedrom edd int13floppy1200 int13floppy720 int13floppy2880 int5printscreen int9keyboard int14serial int17printer int10video acpi usb zipboot biosbootspecification *-pci description: Host bridge product: 2nd Generation Core Processor Family DRAM Controller vendor: Intel Corporation physical id: 100 bus info: pci@0000:00:00.0 version: 09 width: 32 bits clock: 33MHz *-pci:0 description: PCI bridge product: Xeon E3-1200/2nd Generation Core Processor Family PCI Express Root Port vendor: Intel Corporation physical id: 1 bus info: pci@0000:00:01.0 version: 09 width: 32 bits clock: 33MHz capabilities: pci pm msi pciexpress normal_decode bus_master cap_list configuration: driver=pcieport resources: irq:40 ioport:e000(size=4096) memory:fe600000-fe6fffff ioport:d0000000(size=268435456) *-display description: VGA compatible controller product: Juniper [Radeon HD 5700 Series] vendor: Advanced Micro Devices [AMD] nee ATI physical id: 0 bus info: pci@0000:01:00.0 version: 00 width: 64 bits clock: 33MHz capabilities: pm pciexpress msi vga_controller bus_master cap_list rom configuration: driver=radeon latency=0 resources: irq:44 memory:d0000000-dfffffff memory:fe620000-fe63ffff ioport:e000(size=256) memory:fe600000-fe61ffff *-multimedia description: Audio device product: Juniper HDMI Audio [Radeon HD 5700 Series] vendor: Advanced Micro Devices [AMD] nee ATI physical id: 0.1 bus info: pci@0000:01:00.1 version: 00 width: 64 bits clock: 33MHz capabilities: pm pciexpress msi bus_master cap_list configuration: driver=snd_hda_intel latency=0 resources: irq:48 memory:fe640000-fe643fff *-communication description: Communication controller product: 6 Series/C200 Series Chipset Family MEI Controller #1 vendor: Intel Corporation physical id: 16 bus info: pci@0000:00:16.0 version: 04 width: 64 bits clock: 33MHz capabilities: pm msi bus_master cap_list configuration: driver=mei latency=0 resources: irq:45 memory:fe708000-fe70800f *-usb:0 description: USB controller product: 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #2 vendor: Intel Corporation physical id: 1a bus info: pci@0000:00:1a.0 version: 05 width: 32 bits clock: 33MHz capabilities: pm debug ehci bus_master cap_list configuration: driver=ehci_hcd latency=0 resources: irq:16 memory:fe707000-fe7073ff *-multimedia description: Audio device product: 6 Series/C200 Series Chipset Family High Definition Audio Controller vendor: Intel Corporation physical id: 1b bus info: pci@0000:00:1b.0 version: 05 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list configuration: driver=snd_hda_intel latency=0 resources: irq:46 memory:fe700000-fe703fff *-pci:1 description: PCI bridge product: 6 Series/C200 Series Chipset Family PCI Express Root Port 1 vendor: Intel Corporation physical id: 1c bus info: pci@0000:00:1c.0 version: b5 width: 32 bits clock: 33MHz capabilities: pci pciexpress msi pm normal_decode bus_master cap_list configuration: driver=pcieport resources: irq:41 memory:fe500000-fe5fffff *-network description: Network controller product: BCM4313 802.11b/g/n Wireless LAN Controller vendor: Broadcom Corporation physical id: 0 bus info: pci@0000:02:00.0 version: 01 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list configuration: driver=bcma-pci-bridge latency=0 resources: irq:16 memory:fe500000-fe503fff *-pci:2 description: PCI bridge product: 6 
Series/C200 Series Chipset Family PCI Express Root Port 4 vendor: Intel Corporation physical id: 1c.3 bus info: pci@0000:00:1c.3 version: b5 width: 32 bits clock: 33MHz capabilities: pci pciexpress msi pm normal_decode bus_master cap_list configuration: driver=pcieport resources: irq:42 memory:fe400000-fe4fffff *-network description: Ethernet interface product: NetLink BCM57788 Gigabit Ethernet PCIe vendor: Broadcom Corporation physical id: 0 bus info: pci@0000:03:00.0 logical name: eth0 version: 01 serial: 18:03:73:e1:a7:71 size: 100Mbit/s capacity: 1Gbit/s width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd 1000bt 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=tg3 driverversion=3.123 duplex=full firmware=sb ip=192.168.1.3 latency=0 link=yes multicast=yes port=MII speed=100Mbit/s resources: irq:47 memory:fe400000-fe40ffff *-usb:1 description: USB controller product: 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #1 vendor: Intel Corporation physical id: 1d bus info: pci@0000:00:1d.0 version: 05 width: 32 bits clock: 33MHz capabilities: pm debug ehci bus_master cap_list configuration: driver=ehci_hcd latency=0 resources: irq:23 memory:fe706000-fe7063ff *-isa description: ISA bridge product: H67 Express Chipset Family LPC Controller vendor: Intel Corporation physical id: 1f bus info: pci@0000:00:1f.0 version: 05 width: 32 bits clock: 33MHz capabilities: isa bus_master cap_list configuration: latency=0 *-storage description: SATA controller product: 6 Series/C200 Series Chipset Family SATA AHCI Controller vendor: Intel Corporation physical id: 1f.2 bus info: pci@0000:00:1f.2 version: 05 width: 32 bits clock: 66MHz capabilities: storage msi pm ahci_1.0 bus_master cap_list configuration: driver=ahci latency=0 resources: irq:43 ioport:f070(size=8) ioport:f060(size=4) ioport:f050(size=8) ioport:f040(size=4) ioport:f020(size=32) memory:fe705000-fe7057ff *-serial UNCLAIMED description: SMBus product: 6 Series/C200 Series Chipset Family SMBus Controller vendor: Intel Corporation physical id: 1f.3 bus info: pci@0000:00:1f.3 version: 05 width: 64 bits clock: 33MHz configuration: latency=0 resources: memory:fe704000-fe7040ff ioport:f000(size=32) *-scsi:0 physical id: 1 logical name: scsi0 capabilities: emulated *-disk description: ATA Disk product: Hitachi HUA72201 vendor: Hitachi physical id: 0.0.0 bus info: scsi@0:0.0.0 logical name: /dev/sda version: JP4O serial: JPW9J0HD21BTZC size: 931GiB (1TB) capabilities: partitioned partitioned:dos configuration: ansiversion=5 sectorsize=512 signature=000641dc *-volume:0 description: EXT4 volume vendor: Linux physical id: 1 bus info: scsi@0:0.0.0,1 logical name: /dev/sda1 logical name: / version: 1.0 serial: 4e3d91b7-fd38-4f44-a9e9-ba3c39b926ec size: 585GiB capacity: 585GiB capabilities: primary journaled extended_attributes large_files huge_files dir_nlink recover extents ext4 ext2 initialized configuration: created=2012-10-21 16:26:50 filesystem=ext4 lastmountpoint=/ modified=2012-10-29 18:12:08 mount.fstype=ext4 mount.options=rw,relatime,errors=remount-ro,data=ordered mounted=2012-10-29 18:12:08 state=mounted *-volume:1 description: Extended partition physical id: 2 bus info: scsi@0:0.0.0,2 logical name: /dev/sda2 size: 7823MiB capacity: 7823MiB capabilities: primary extended partitioned partitioned:extended *-logicalvolume description: Linux swap / Solaris partition physical id: 5 logical name: /dev/sda5 capacity: 7823MiB 
capabilities: nofs *-volume:2 description: Windows NTFS volume physical id: 3 bus info: scsi@0:0.0.0,3 logical name: /dev/sda3 version: 3.1 serial: 84a92aae-347b-7940-a2d1-f4745b885ef2 size: 337GiB capacity: 337GiB capabilities: primary bootable ntfs initialized configuration: clustersize=4096 created=2012-10-21 18:43:39 filesystem=ntfs modified_by_chkdsk=true mounted_on_nt4=true resize_log_file=true state=dirty upgrade_on_mount=true *-scsi:1 physical id: 2 logical name: scsi1 capabilities: emulated *-cdrom description: DVD-RAM writer product: DVDRWBD DH-12E3S vendor: PLDS physical id: 0.0.0 bus info: scsi@1:0.0.0 logical name: /dev/cdrom logical name: /dev/cdrw logical name: /dev/dvd logical name: /dev/dvdrw logical name: /dev/sr0 version: MD11 capabilities: removable audio cd-r cd-rw dvd dvd-r dvd-ram configuration: ansiversion=5 status=nodisc *-scsi:2 physical id: 3 bus info: usb@2:1.8 logical name: scsi6 capabilities: emulated scsi-host configuration: driver=usb-storage *-disk:0 description: SCSI Disk physical id: 0.0.0 bus info: scsi@6:0.0.0 logical name: /dev/sdb configuration: sectorsize=512 *-disk:1 description: SCSI Disk physical id: 0.0.1 bus info: scsi@6:0.0.1 logical name: /dev/sdc configuration: sectorsize=512 *-disk:2 description: SCSI Disk physical id: 0.0.2 bus info: scsi@6:0.0.2 logical name: /dev/sdd configuration: sectorsize=512 *-disk:3 description: SCSI Disk product: MS/MS-Pro vendor: Generic- physical id: 0.0.3 bus info: scsi@6:0.0.3 logical name: /dev/sde version: 1.03 serial: 3 capabilities: removable configuration: sectorsize=512 *-medium physical id: 0 logical name: /dev/sde

    Read the article

  • Master Data

    - by david.butler(at)oracle.com
    Let's take a deeper look at what we mean when we talk about 'Master' data. In its most general sense, master data is data that exists in more than one operational application. These are the applications that automate business processes. These applications require significant amounts of data to function correctly.  This includes data about the objects that are involved in transactions, as well as the transaction data itself.  For example, when a customer buys a product, the transaction is managed by a sales application.  The objects of the transaction are the Customer and the Product.  The transactional data is the time, place, price, discount, payment methods, etc. used at the point of sale. Many thousands of transactional data attributes are needed within the application. These important data elements are local to the applications and have no bearing on other applications. Harmonization and synchronization across applications is not necessary. The Customer and Product objects of the transaction also have a large number of attributes. Customer, for example, includes hierarchies, hierarchical and matrixed relationships, contacts, classifications, preferences, accounts, identifiers, profiles, and addresses galore for 'ship to', 'mail to', 'service at', etc. Dozens of attributes exist for individuals, hundreds for organizations, and thousands for products. This data has meaning beyond any particular application. It exists in many applications and drives the vital cross-application enterprise business processes. These are the processes that define and differentiate the organization. At every decision point, information about the objects of the process determines the direction of the process flow. This is the nature of the data that exists in more than one application, and this is why we call it 'master data'. Let me elaborate. Parties Oracle has developed a party schema to model all participants in your daily business operations. It models people, organizations, groups, customers, contacts, employees, and suppliers. It models their accounts, locations, classifications, and preferences.  And most importantly, it models the vast array of hierarchical and matrixed relationships that exist between all the participants in your real world operations.  The model logically separates people and organizations from their relationships and accounts.  This separation creates flexibility unmatched in the industry and accounts for the fact that the Oracle schema for Customers, Suppliers, and Accounts is a true superset of the wide variety of commercial and homegrown customer models in existence. Sites Sites are places where business is conducted. They can be addresses, clusters such as retail malls, locations within a cluster, floors within a building, places where meters are located, rooms on floors, etc.  Fully understanding all attributes of a site is key to many business processes. Attributes such as a 'noise abatement policy' at a point of delivery, or the size of an oven in a business kitchen, drive day-to-day activities such as delivery schedules or food promotions. Typically this kind of data is siloed in departments and scattered across applications and spreadsheets.  This leads to conflicting information and poor operational efficiencies. Oracle's Global Single Schema can hold all site attributes in one place and enables a single version of authoritative site information across the enterprise. 
    Products and Services The Oracle Global Single Schema also includes a number of entities that define the products and services a company creates and offers for sale. Key entities include Items organized into Catalogs and Price Lists. The Catalog structures provide for the ability to capture different views of a product such as engineering, manufacturing, and service, which are based on a unified product model. As a result, designers, manufacturing engineers, purchasers and partners can work simultaneously on a common product definition. The Catalog schema allows for unlimited attributes, combines them into meaningful groups, and maps them to catalog categories to track these different types of information. The model also maps an unlimited number of functional structures for each item. For example, multiple Bills of Material (BOMs) can be constructed representing requirements BOM, features BOM, and packaging BOM for an item. The Catalog model also supports hierarchical information about each item and all standard Global Data Synchronization attributes. Business Processes Utilizing Linked Data Entities Each business entity codified into a centralized master data environment significantly improves the efficiency of the automated business processes that use the consolidated data.  When all the key business entities used by an organization's process are so consolidated, the advantages are multiplied.  The primary reason for business process breakdowns (i.e., data errors across application boundaries) is eliminated. All processes are positively impacted and business process automation is itself automated.  I like to use the "Call to Resolution" business process as an example to help illustrate this important point. It involves call center applications, service applications, RMA applications, transportation applications, inventory applications, etc. Customer, Site, Product and Supplier master data must all be correct and consistent across these applications.  What's more, the data relationships between customer and product, and product and suppliers, must be right. This is the minimum quality needed to ensure the business process flows without error. But that is not the end of the story. Critical master data attributes such as customer loyalty, profitability, credit worthiness, and propensity to buy can optimize the call center point of contact component of the process. Critical product information such as alternative parts or equivalent products can optimize the resolution selected by the process. A comprehensive understanding of the 'service at' location can help ensure multiple trips are avoided in the process. Full supplier information on reliability, delivery delays, and potential alternates can prevent supplier exceptions and play a significant role in optimizing the process.  In other words, these master data attributes enable the optimization of the "Call to Resolution" enterprise business process. Master data supports and guides business process flows. Thus the phrase 'Master Data' is indeed appropriate. MDM is the software that houses, manages, and governs the master data that resides in all applications and controls the enterprise business processes. A complete master data solution requires a data model that holds fully attributed master data entities and their inter-relationships. Oracle has this model. 
    Oracle, with its deep understanding of application data, is the logical choice for managing all your master data within the enterprise, whether or not your organization actually runs any Oracle Applications.
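    As a purely illustrative sketch (not Oracle's actual schema), the separation described above - parties and sites as shared entities, with relationships modeled apart from the entities themselves - might look something like this in code; all names here are hypothetical:
        from dataclasses import dataclass, field

        @dataclass
        class Party:               # a person or organization shared by many applications
            party_id: str
            name: str
            classifications: list[str] = field(default_factory=list)

        @dataclass
        class Site:                # a place where business is conducted
            site_id: str
            address: str
            attributes: dict[str, str] = field(default_factory=dict)  # e.g. a noise abatement policy

        @dataclass
        class Relationship:        # relationships live apart from the entities they connect
            from_id: str
            to_id: str
            kind: str              # e.g. "ship to", "contact of", "subsidiary of"

        # A customer organization, its delivery site, and a 'ship to' relationship.
        acme = Party("P-100", "Acme Corp", ["customer"])
        dock = Site("S-7", "12 Harbour Rd", {"noise abatement policy": "no deliveries after 22:00"})
        link = Relationship("P-100", "S-7", "ship to")
    The point of the separation is that the same Party can participate in many relationships and accounts without being duplicated in each consuming application.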

    Read the article

  • Not attending the LUGM mini-meetup - 05. Oct 2013

    Not attending a meeting of the LUGM can be fun, too. It's getting a bit of a habit that Ish is organising small gatherings, aka mini-meetups, of the Linux User Group Mauritius/Meta (LUGM) almost every Saturday. There they mainly discuss and talk about various elements of using Linux as one's main operating system and the possibilities you are going to have. On top of that, of course, some tips & tricks about mastering the command line and initial steps in scripting or even writing HTML. In general, it sounds like a good portion of fun and a great spirit of community. Unfortunately, I'm usually quite busy with private and family matters during the weekend and so I had already signalled that I wouldn't be around. Well, at least not physically... But this Saturday a couple of things worked out faster than expected and so I was hanging out on my machine. I made virtual contact with one of Pawan's messages over on Facebook... And somehow that kicked off some kind of online fun around the basic configuration of Apache HTTPd 2.2.x and PHP 5.x, and how to improve the overall performance of a newly installed blog based on WordPress. Default configuration files Nitin's website finally came alive and despite the dark theme and the hidden Apple 'fanboy' advertisement I was more interested in the technical situation. As with any new installation there is usually quite some adjustment to be done. And Nitin's page was no exception. Unfortunately, out-of-the-box installations of Apache httpd and PHP are too verbose and expose too much information under the hood. You might think that this isn't really a problem at all; well, think about it again after reading this article completely. First, I checked the HTTP response headers of Nitin's page - using either the Chrome Developer Tools or the Firefox Web Developer extension - and based on that I advised him to lower the noise levels a little bit. It's not really necessary for detailed information about the web server software and scripting language to be published in every response. Quite a number of script kiddies and exploits actually check for version specifics prior to an attack. So, removing at least the version details hardens the system a little bit. In particular, I'm talking about these response values: Server X-Powered-By How to achieve that? By tweaking the configuration files... Namely, we are going to look into the following ones: apache2.conf httpd.conf .htaccess php.ini The above list contains some additional files I'm talking about in the next paragraphs. Anyway, those are the ones involved. Tweaking Apache Open your favourite text editor and start to modify the apache2.conf. You might first like to have a quick peek at the file to see whether it is necessary to adjust it or not. Following is a handy combination of commands to get an overview of your active directives: # sudo grep -v '#' /etc/apache2/apache2.conf | grep -v '^$' | less There you keep an eye on those two Apache directives: ServerSignature Off ServerTokens Prod If that's not the case, change them as highlighted above. In order to activate your modifications you have to restart the Apache httpd server. On Debian and Ubuntu you might use apache2ctl for that; on other distributions you might have to use service or run the init scripts again:
    # sudo apache2ctl configtest
    Syntax OK
    # sudo apache2ctl restart
    Refresh your website and check the HTTP response header. 
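    As a quick cross-check from a script rather than the browser tools, a few lines of Python will show what the server currently announces (this is merely an illustrative sketch, not part of the original conversation, and the URL is just a placeholder):
        import urllib.request

        # Fetch the page and print the two headers the discussion is about.
        response = urllib.request.urlopen("http://www.example.org/")
        for name in ("Server", "X-Powered-By"):
            print(name, "->", response.headers.get(name, "(not sent)"))
    After the ServerTokens/ServerSignature changes above and the expose_php change below, the Server value should shrink to little more than "Apache" and X-Powered-By should disappear entirely.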
    Tweaking PHP5 (a little bit) Next, check your php.ini file with the following statement: # sudo grep -v ';' /etc/php5/apache2/php.ini | grep -v '^$' | less And check the value of expose_php = Off Again, if it's not as highlighted, change it... Some more Apache love Okay, back to Apache: it might also be interesting to improve the situation around browser caching and to remove more obsolete information. When you run your website against the usual performance checks like Google Page Speed and Yahoo YSlow you might see those check points with bad grades on a standard, default configuration. Well, this can be fixed easily. Configure entity tags (ETags) ETags are only interesting when you run your websites on a farm of multiple web servers. Removing this data for your static resources is very simple in Apache. As we are going to deal with the HTTP response header information you have to ensure that Apache is capable of manipulating it. First, check your enabled modules: # sudo ls -al /etc/apache2/mods-enabled/ | grep headers And in case the 'headers' module is not listed, you have to enable it from the available ones: # sudo a2enmod headers Second, check your httpd.conf file (in case it exists): # sudo grep -v '#' /etc/apache2/httpd.conf | grep -v '^$' | less In newer (or rather, fresh) installations you might have to create a new configuration file below your conf.d folder with your favourite text editor, like so: # sudo nano /etc/apache2/conf.d/headers.conf Then, in order to tweak your HTTP responses, either check for those lines or add them:
    Header unset ETag
    FileETag None
    In case your file doesn't exist or those lines are missing, feel free to create/add them. Afterwards, check your Apache configuration syntax and restart your running instances as already shown above:
    # sudo apache2ctl configtest
    Syntax OK
    # sudo apache2ctl restart
    Add Expires headers To improve the loading performance of your website, you should take some care over properly configuring how to leverage the browser's ability to cache certain resources and files. This is done by adding an Expires: value to the HTTP response header. Generally speaking it is advised that you specify a near-future expiration - read: 1 week or a little bit more - for your static content like JavaScript files or Cascading Style Sheets. One solution to adjust this is to put some instructions into the .htaccess file in the root folder of your web site. Of course, this could also be placed into a more generic location of your Apache installation but honestly, I'd like to keep this at the web site level. Following are some adjustments I'm currently using on this blog site:
    # Turn on Expires and set default to 0
    ExpiresActive On
    ExpiresDefault A0
    # Set up caching on media files for 1 year (forever?)
    <FilesMatch "\.(flv|ico|pdf|avi|mov|ppt|doc|mp3|wmv|wav)$">
    ExpiresDefault A29030400
    Header append Cache-Control "public"
    </FilesMatch>
    # Set up caching on media files for 1 week
    <FilesMatch "\.(js|css)$">
    ExpiresDefault A604800
    Header append Cache-Control "public"
    </FilesMatch>
    # Set up caching on media files for 31 days
    <FilesMatch "\.(gif|jpg|jpeg|png|swf)$">
    ExpiresDefault A2678400
    Header append Cache-Control "public"
    </FilesMatch>
    As we are editing the .htaccess files, it is not necessary to restart Apache. 
    In case your web site doesn't load anymore or you're experiencing an error while trying to restart your httpd, check that the 'expires' module is actually enabled:
    # ls -al /etc/apache2/mods-enabled/ | grep expires
    # sudo a2enmod expires
    Of course, the instructions above are not feature complete but I hope that they might provide a better default configuration for your LAMP stack. Resume of the day Within a couple of hours, and while being occupied with an eLearning course on SQL Server 2012, I had some good fun in helping and assisting other LUGM members while they were some kilometers away at Bagatelle. According to other blog articles it seems that Nitin had quite some moments of desperation. Just for the record: at no time was it my intention to either kick his butt or pull his leg. I was simply providing some input based on the lessons I've learned over the last couple of years configuring Apache HTTPd and PHP. Check out the other blogs, too: LUGM mini-meetup... Epic! Superb Saturday Linux Meetup And last but not least, the man himself: The end of a new beginning Cheers, and happy community'ing! Updates Due to our weekly Code & Coffee sessions in the MSCC community, I had a chance to talk to Nitin directly and he showed me the problems on his machine. This led me to update this article, hence the paragraphs on enabling the modules 'headers' and 'expires'.

    Read the article

  • Conducting Effective Web Meetings

    - by BuckWoody
    There are several forms of corporate communication. From immediate, rich communications like phones and IM messaging to historical transactions like e-mail, there are a lot of ways to get information to one or more people. From time to time, it's even useful to have a meeting. (This is where a witty picture of a guy sleeping in a meeting goes. I won't bother actually putting one here; you're already envisioning it in your mind) Most meetings are pointless, and a complete waste of time. This is the fault, completely and solely, of the organizer. It's because he or she hasn't thought things through enough to think about alternate forms of information passing. Here's the criterion for a good meeting - whether in-person or over the web: 100% of the content of a meeting should require the participation of 100% of the attendees for 100% of the time It doesn't get any simpler than that. If it doesn't meet that criterion, then don't invite that person to that meeting. If you're just conveying information and no one has the need for immediate interaction with that information (like telling you something that modifies the message), then send an e-mail. If you're a manager, and you need to get status from lots of people, pick up the phone. If you need a quick answer, use IM. I once had a high-level manager who called frequent meetings. His real need was status updates on various processes, so 50 of us would sit in a room while he asked each one of us questions. He believed this larger meeting helped us "cross-pollinate ideas". In fact, it was a complete waste of time for most everyone, except in the one or two moments that they interacted with him. So I wrote some code for a Palm Pilot (which was a kind of SmartPhone but with no phone and no real graphics, but this was in the days when we had just discovered fire and the wheel, although the order of those things is still in debate) that took an average of the salaries of the people in the room (I guessed at it) and ran a timer which multiplied the number of people against the salaries. I left that running in plain sight for him, and when he asked about it, I explained how much the meetings were really costing the company. We had far fewer meetings after that. Meetings are now web-enabled. I believe that's largely a good thing, since it saves on travel time and allows more people to participate, but I think the rule above still holds. And in fact, there are some other rules that you should follow to have a great meeting - and fewer of them. Be Clear About the Goal This is important in any meeting, but all of us have probably gotten an invite with a web link and an ambiguous title. Then you get to the meeting, and it's a 500-level deep-dive on something everyone expects you to know. This is unfair to the "expert" and to the participants. I always tell people who invite me to a meeting that I will be as detailed as I can - but the more detail they can tell me about the questions, the more detailed I can be in my responses. Granted, there are times when you don't know what you don't know, but the more you can say about the topic the better. There's another point here - and it's that you should have a clearly defined "win" for the meeting. When the meeting is over, and everyone goes back to work, what were you expecting them to do with the information? Have that clearly defined in your head, and in the meeting invite. Understand the Technology There are several web-meeting clients out there. I use them all, since I meet with clients all over the world. 
    They all work differently - so I take a few moments and read up on the different clients and find out how I can use the tools properly. I do this with the technology I use for everything else, and it's important to understand it if the meeting is to be a success. If you're running the meeting, know the tools. I don't care if you like the tools or not; learn them anyway. Don't waste everyone else's time just because you're too bitter/snarky/lazy to spend a few minutes reading. Check your phone or mic. Check your video size. Install (and learn to use) ZoomIT (http://technet.microsoft.com/en-us/sysinternals/bb897434.aspx). Format your slides or screen or output correctly. Learn to use the voting features of the meeting software, and especially its whiteboard features. Figure out how multiple monitors work. Try a quick meeting with someone to test all this. Do this *before* you invite lots of other people to your meeting. Use a WebCam I'm not a pretty man. I have a face fit for radio. But after attending a meeting with clients where one Microsoft person used a webcam and another did not, I'm convinced that people pay more attention when a face is involved. There are tons of studies around this, or you can take my word for it, but toss a shirt on over those pajamas and turn the webcam on. Set Up Early Whether you're attending or leading the meeting, don't wait to sign on to the meeting at the time when it starts. I can almost plan that a 10:00 meeting will actually start at 10:10 because the participants/leader is just now installing the web client for the meeting at 10:00. Sign on early, go on mute, and then wait for everyone to arrive. Mute When Not Talking No one wants to hear your screaming offspring / yappy dog / other cubicle conversations / car wind noise (are you driving in a desert storm or something?) while the person leading the meeting is trying to talk. I use the Lync software from Microsoft for my meetings, and I mute everyone by default, and then tell them to un-mute to talk to the group. Share Collateral If you have a PowerPoint deck, mail it out in case you have a tech failure. If you have a document, share it as an attachment to the meeting. Don't make people ask you for the information - that's why you're there to begin with. Even better, send it out early. "But", you say, "then no one will come to the meeting if they have the deck first!" Uhm, then don't have a meeting. Send out the deck and a quick e-mail and let everyone get on with their productive day. Set Actions At the Meeting A meeting should have some sort of outcome (see point one). That means there are actions to take, a follow-up, or some deliverable. Otherwise, it's an e-mail. At the meeting, decide who will do what, when things are needed, and so on. And avoid, if at all possible, setting up another meeting, unless absolutely necessary. So there you have it. Whether it's on-premises or on the web, meetings are a necessary evil, and should be treated that way. Like politicians, you should have as few of them as are necessary to keep the roads paved and public libraries open.
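    The meeting-cost timer described earlier is easy to recreate; here is a minimal sketch of the same arithmetic (average cost per hour, times headcount, times elapsed time), with made-up salary and headcount figures purely for illustration:
        import time

        AVG_ANNUAL_SALARY = 120_000            # assumption: a guess at the room's average
        HOURLY_COST = AVG_ANNUAL_SALARY / (52 * 40)
        ATTENDEES = 50

        start = time.monotonic()
        while True:                            # press Ctrl+C to stop the ticker
            elapsed_hours = (time.monotonic() - start) / 3600
            cost = HOURLY_COST * ATTENDEES * elapsed_hours
            print(f"\rThis meeting has cost roughly ${cost:,.2f} so far", end="")
            time.sleep(5)
    Leaving a running total like this in plain sight makes the trade-off behind every invite very concrete.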

    Read the article

< Previous Page | 26 27 28 29 30 31 32  | Next Page >