Search Results

Search found 3315 results on 133 pages for 'magic packet'.

Page 76 of 133

  • How Do Sockets Work in C?

    - by kaybenleroll
    I am a bit confused about socket programming in C. You create a socket, bind it to an interface and an IP address, and get it to listen. I found a couple of web resources on that and understood it fine. In particular, I found the article Network programming under Unix systems to be very informative. What confuses me is the timing of data arriving on the socket. How can you tell when packets arrive, and how big a packet is? Do you have to do all the heavy lifting yourself? My basic assumption here is that packets can be of variable length, so once binary data starts appearing down the socket, how do you begin to construct packets from it?
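    TCP delivers a byte stream rather than discrete packets, so the usual answer is to impose your own framing on top of it, for example a fixed-size length prefix before each message. A minimal sketch of that idea in C (the helper names are illustrative, not taken from the article mentioned above):

        #include <stdint.h>
        #include <sys/types.h>
        #include <sys/socket.h>
        #include <arpa/inet.h>

        /* Read exactly len bytes, looping because recv() may return short reads. */
        static int recv_all(int fd, void *buf, size_t len)
        {
            char *p = buf;
            while (len > 0) {
                ssize_t n = recv(fd, p, len, 0);
                if (n <= 0)
                    return -1;          /* error or peer closed the connection */
                p += n;
                len -= (size_t)n;
            }
            return 0;
        }

        /* Receive one length-prefixed message into buf (at most cap bytes). */
        static ssize_t recv_message(int fd, char *buf, size_t cap)
        {
            uint32_t netlen;
            if (recv_all(fd, &netlen, sizeof netlen) < 0)
                return -1;
            uint32_t len = ntohl(netlen);   /* sender wrote the length in network byte order */
            if (len > cap)
                return -1;                  /* message too big for the caller's buffer */
            if (recv_all(fd, buf, len) < 0)
                return -1;
            return (ssize_t)len;
        }

    The sender writes htonl(length) followed by the payload; the receiver then never has to guess where one message ends and the next begins.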

    Read the article

  • How to customize the filter when following a stream in Wireshark?

    - by jim
    When selecting a packet and choosing to follow the stream, Wireshark automatically sets a filter that looks something like this: (ip.addr eq 10.2.3.8 and ip.addr eq 10.2.255.255) and (udp.port eq 999 and udp.port eq 899). I'd like to be able to set that myself when following the stream, but have not been able to identify where to do that. Setting the display filter has no effect; in fact, after following the stream, whatever display filter is currently set will be replaced by the follow-stream formatted filter. Is customizing the follow-stream filter even possible? Thanks

    Read the article

  • Simple performance testing tool in C#?

    - by Tomas
    Hi, first off: I need to do this as a university project, so I am not interested in using existing tools. I would like to know whether it is even possible to write a very simple tool that I could use for performance testing of web applications. It would only record actions (maybe just by packet sniffing?) and then replay them. I have a basic idea (record packets on port 80 and send them again), but I do not know how to measure the time for each transaction, as they are not differentiated. Any help is greatly appreciated, thank you!
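    One way to differentiate transactions is to replay each recorded request on its own TCP connection, so a transaction is bounded by connect() and the server closing the socket. A minimal sketch, in C rather than C# purely to show the mechanics; it assumes HTTP/1.0-style "Connection: close" responses so the recv() loop terminates:

        #include <time.h>
        #include <string.h>
        #include <unistd.h>
        #include <sys/socket.h>
        #include <netinet/in.h>
        #include <arpa/inet.h>

        /* Replay one recorded HTTP request on a fresh connection and time it.
           Per-transaction timing works because each request gets its own
           socket: the transaction is delimited by connect() and close(). */
        static double time_transaction(const char *ip, int port,
                                       const char *req, size_t reqlen)
        {
            struct sockaddr_in addr = {0};
            addr.sin_family = AF_INET;
            addr.sin_port = htons(port);
            inet_pton(AF_INET, ip, &addr.sin_addr);

            int fd = socket(AF_INET, SOCK_STREAM, 0);
            struct timespec t0, t1;
            clock_gettime(CLOCK_MONOTONIC, &t0);

            if (connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
                close(fd);
                return -1.0;
            }
            send(fd, req, reqlen, 0);

            char buf[4096];
            while (recv(fd, buf, sizeof buf, 0) > 0)
                ;   /* drain the response; the server closing ends the transaction */

            clock_gettime(CLOCK_MONOTONIC, &t1);
            close(fd);
            return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        }

    The same structure translates directly to C# with TcpClient and Stopwatch; the design point is that one connection per replayed request gives you unambiguous transaction boundaries to time.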

    Read the article

  • How to split and join an array in C++?

    - by Richard Knop
    I have a byte array like this:

        lzo_bytep out; // my byte array
        size_t uncompressedImageSize = 921600;
        out = (lzo_bytep) malloc(uncompressedImageSize + uncompressedImageSize / 16 + 64 + 3);
        wrkmem = (lzo_voidp) malloc(LZO1X_1_MEM_COMPRESS);
        // After compression the byte array holds 802270 bytes
        r = lzo1x_1_compress(imageData, uncompressedImageSize, out, &out_len, wrkmem);

    How can I split it into smaller parts under 65,535 bytes (the byte array is one large packet which I want to send over UDP, which has an upper limit of 65,535 bytes) and then join those small chunks back into a continuous array?
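    One straightforward scheme is to prefix every datagram with its offset into the original buffer, so the receiver can copy chunks back into place in whatever order they arrive. A minimal sketch, assuming a connected UDP socket; the 8-byte header layout and chunk size are illustrative choices, and in practice payloads are often kept near the path MTU (~1400 bytes) to avoid IP fragmentation:

        #include <stdint.h>
        #include <string.h>
        #include <sys/socket.h>

        #define CHUNK_MAX 65000u   /* payload bytes per datagram, safely under the UDP limit */

        /* Send buf as a series of datagrams: [4-byte offset][4-byte total][data].
           Native byte order is used for brevity; a real protocol would use htonl(). */
        static int send_chunked(int sock, const unsigned char *buf, uint32_t total)
        {
            unsigned char pkt[8 + CHUNK_MAX];   /* ~64 KB; fine with default thread stacks */
            for (uint32_t off = 0; off < total; off += CHUNK_MAX) {
                uint32_t n = (total - off < CHUNK_MAX) ? total - off : CHUNK_MAX;
                memcpy(pkt, &off, 4);
                memcpy(pkt + 4, &total, 4);
                memcpy(pkt + 8, buf + off, n);
                if (send(sock, pkt, 8 + n, 0) < 0)
                    return -1;
            }
            return 0;
        }

        /* Receiver side: place one received datagram back into the output buffer. */
        static void join_chunk(unsigned char *out, const unsigned char *pkt, size_t pktlen)
        {
            uint32_t off;
            memcpy(&off, pkt, 4);
            memcpy(out + off, pkt + 8, pktlen - 8);
        }

    Note that the compressed size out_len (802270 bytes here) is the total to split, not the uncompressed 921600, and that UDP gives no delivery or ordering guarantees, so a real transfer also needs to detect and re-request missing chunks.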

    Read the article

  • JSONP parsing error from WCF

    - by user1754730
    Answered my own question. I had a problem with a jQuery (JSONP) call to a WCF service that was throwing a JSON parsing error, using ASP.NET 4.0 on the WCF side and jQuery 1.7 on the client side. It turned out there was an old set of script tags on the page using language = VBSCRIPT. The browser was interpreting the returned JSON packet of script as VBScript instead of JavaScript. I placed a set of empty JavaScript tags at the top of the page and the browser is now interpreting the JSON as the proper JavaScript function. Hope this helps someone else. Tom

    Read the article

  • MySQL Connection Error from 1.1.1 to 1.2.1

    - by Chromag
    I upgraded from 1.1.1 to 1.2.1 and I seem to be getting the following exception when it attempts to connect to MySQL:

        The last packet sent successfully to the server was 0 milliseconds ago.
        The driver has not received any packets from the server.
            at com.mysql.jdbc.Util.handleNewInstance(Util.java:407)
            at com.mysql.jdbc.SQLError.createCommunicationsException(SQLError.java:1116)
            at com.mysql.jdbc.MysqlIO.<init>(MysqlIO.java:343)
            ...
        Caused by: java.net.ConnectException: Connection refused
            at java.net.PlainSocketImpl.socketConnect(Native Method)
            at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333)
            at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195)

    I've confirmed that MySQL is indeed running and seems to be working fine. The following is the line from my application.conf file (with user/pass/db replaced):

        db=mysql:username:password@databasename

    I also tried using the full JDBC configuration. Did I miss something? This worked just fine in 1.1.1. I'm running MySQL 5.1.41. Thanks.

    Read the article

  • 16 millisecond quantization when sending/receiving TCP packets

    - by MKZ
    Hi, I have a C++ application running on a Windows XP 32-bit system, sending and receiving short TCP/IP packets. Measuring the arrival time accurately, I see a quantization of the arrival time to 16 millisecond units (meaning all arriving packets are separated from each other by 16×N milliseconds). To avoid packet aggregation I tried to disable the Nagle algorithm by setting the IPPROTO_TCP option TCP_NODELAY on the socket, but it did not help. I suspect that the problem is related to the Windows scheduler, which also has a ~16 millisecond clock. Any idea of a solution to this problem? Thanks
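    The ~16 ms spacing matches the default Windows timer tick (15.625 ms). If the quantization comes from timer/scheduler resolution rather than from the network itself, the multimedia timer API can raise the tick rate for the whole system. A hedged sketch; note that NIC interrupt moderation can produce a similar batching effect, so treat this as one candidate fix rather than a confirmed one:

        #include <windows.h>
        #include <mmsystem.h>   /* link with winmm.lib */

        int main(void)
        {
            /* Request 1 ms timer resolution for the lifetime of the process.
               On XP this raises the scheduler/timer granularity system-wide. */
            if (timeBeginPeriod(1) != TIMERR_NOERROR)
                return 1;

            /* ... run the send/receive/timestamp loop here; arrival times
               should no longer snap to the default ~15.6 ms tick ... */

            timeEndPeriod(1);   /* always pair with timeBeginPeriod */
            return 0;
        }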

    Read the article

  • Bandwidth throttling in C on Linux

    - by bob moch
    Hi, I'm currently creating a function to compute a sleep time to pause between packets for a port scanner I'm writing for personal/educational use on my home network. What I'm currently doing is opening /proc/net/dev and reading the 9th numeric field on the eth0 line to find out the current bytes sent, then reading it again and doing some math to work out a delay to sleep between sending a packet to a port to identify and fingerprint it. My problem is that no matter what throttle % I use, it always seems to send packets at the same rate. I think the main issue is my way of mathematically deriving the sleep delay.

    Edit: don't mind the function declaration and the struct stuff; all I'm doing is spawning this function in a thread and passing it a pointer to a struct, recreating the struct locally and then freeing the passed struct's memory.

        void *bandwidthmonitor_cmd(void *param)
        {
            char cmdline[1024], *bytedata[19];
            int i = 0, ii = 0;
            long long prevbytes = 0, currentbytes = 0, elapsedbytes = 0,
                      byteusage = 0, maxthrottle = 0;
            command_struct bandwidth = *((command_struct *)param);
            free(param);

            maxthrottle = UPLOAD_SPEED * bandwidth.throttle / 100;

            FILE *f = fopen("/proc/net/dev", "r");
            if (f != NULL) {
                while (1) {
                    while (fgets(cmdline, sizeof(cmdline), f) != NULL) {
                        cmdline[strlen(cmdline)] = '\0';
                        if (strncmp(cmdline, " eth0", 6) == 0) {
                            bytedata[0] = strtok(cmdline, " ");
                            while (bytedata[i] != NULL) {
                                i++;
                                bytedata[i] = strtok(NULL, " ");
                            }
                            bytedata[i + 1] = NULL;
                            /* atoll, not atoi: the counter overflows int once it passes 2^31 */
                            currentbytes = atoll(bytedata[9]);
                        }
                    }
                    i = 0;
                    rewind(f);
                    elapsedbytes = currentbytes - prevbytes;
                    prevbytes = currentbytes;
                    byteusage = 8 * (elapsedbytes / 1024);
                    if (ii & 0x40) {
                        SLEEP += (maxthrottle - byteusage) * -1.1; // was -2.5
                        if (SLEEP < 0) { SLEEP = 0; }
                    }
                    usleep(25000);
                    ii++;
                }
            }
            return NULL;
        }

    SLEEP and UPLOAD_SPEED are global variables; UPLOAD_SPEED is in kb/s, generated by a speed-test function that measures my machine's upload speed. This function runs inside a POSIX thread, updating SLEEP, which the threads doing the socket work read to sleep after every packet. As a test, instead of scanning only the ports I want to check, I scan all ports over and over so I can watch bandwidth with dstat on another machine, and no matter what bandwidth.throttle is set to, it always generates the same amount of traffic to the dstat machine.

    The way I calculate how much I "should" throttle is by finding the maximum throttle speed, defined as maxthrottle = upload_speed * throttle / 100. For example, if my upload speed were 1000 kb/s and my throttle were 90 (90%), my max throttle would be 900 kb/s. From there the code reads the current bytes sent from /proc/net/dev and adjusts the sleep time via SLEEP += (maxthrottle - byteusage) * -1.1, which should in theory increase or decrease the sleep time based on how many bytes are being used. The if (ii & 0x40) test is just for moderation: it only assigns a new sleep value every 30-40 iterations.

    Final notes: the main problem is that the sleep timer does not seem to change the rate at which packets are sent. Or maybe it's just my implementation: on a freshly restarted machine, where /proc/net/dev reports a low byte count, it does raise the sleep timer accordingly on my 60 kb/s upload machine (e.g. if I set the throttle to 2 it keeps increasing the sleep timer until the outgoing bandwidth reaches the threshold), but when I run it on a server that has been online forever it doesn't seem to work as nicely, if at all. If anyone can suggest a better method of monitoring the network to adjust a sleep delay, or sees a flaw in my code, please let me know. Thank you.
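    A simpler pacing approach than steering a sleep value off /proc/net/dev counters is a token bucket driven by a monotonic clock: the sender asks permission before each packet, and the send rate converges on the configured limit by construction, regardless of how long the host has been up. A minimal sketch of that alternative (not the poster's code; names and constants are illustrative):

        #include <time.h>

        /* Token bucket: permits up to 'rate' bytes/second with bursts up to 'burst'. */
        typedef struct {
            double tokens;        /* bytes we may send right now */
            double rate;          /* refill rate, bytes per second */
            double burst;         /* bucket capacity, bytes */
            struct timespec last; /* time of the last refill */
        } bucket_t;

        static void bucket_init(bucket_t *b, double rate, double burst)
        {
            b->tokens = burst;
            b->rate = rate;
            b->burst = burst;
            clock_gettime(CLOCK_MONOTONIC, &b->last);
        }

        /* Block until packet_bytes may be sent, then consume that many tokens.
           Guard with a mutex if several sender threads share one bucket. */
        static void bucket_wait(bucket_t *b, size_t packet_bytes)
        {
            for (;;) {
                struct timespec now;
                clock_gettime(CLOCK_MONOTONIC, &now);
                double dt = (now.tv_sec - b->last.tv_sec) +
                            (now.tv_nsec - b->last.tv_nsec) / 1e9;
                b->last = now;
                b->tokens += dt * b->rate;
                if (b->tokens > b->burst)
                    b->tokens = b->burst;
                if (b->tokens >= (double)packet_bytes) {
                    b->tokens -= (double)packet_bytes;
                    return;   /* ok to send the packet now */
                }
                struct timespec ts = { 0, 1000000 };  /* sleep 1 ms and re-check */
                nanosleep(&ts, NULL);
            }
        }

    With a 90% throttle of a measured upload speed, something like bucket_init(&b, UPLOAD_SPEED * 1024.0 * 0.90, 64 * 1024.0) and one bucket_wait(&b, pkt_len) before each send would cap the outgoing rate directly.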

    Read the article

  • How can I handle this kind of exception (in Doctrine)?

    - by ppavlovic
    Can you tell me how I can handle this kind of exception:

        Fatal error: Uncaught exception 'Doctrine_Connection_Exception' with message
        'PDO Connection Error: SQLSTATE[HY000] [2013] Lost connection to MySQL server
        at 'reading initial communication packet', system error: 110' in ...

    It happens when the connection with MySQL is lost during a query. I need to handle this exception so I can show a 500 error page (so crawlers do not cache the page) and redirect the user to an appropriate "Try again" page. P.S. I have lots of code, so I cannot go through all of it adding try/catch blocks. I need something simple and yet effective.

    Read the article

  • [PHP] How can I connect to MySQL on WAMP server?

    - by user294359
    This might be ridiculously easy for you, but I've been struggling with this for an hour. :(

        <?php
        $connect = mysql_connect("localhost:8080", "root", "mypassword");
        echo($connect);
        ?>

    This is the code that I'm trying to run; you can see that I'm using 8080 as my port number and, of course, I have HTML code as well. However, it gives me the following error messages whenever I try to open the PHP file:

        Warning: mysql_connect() [function.mysql-connect]: MySQL server has gone away in C:\wamp\www\php_sandbox\index.php on line 2
        Warning: mysql_connect() [function.mysql-connect]: Error while reading greeting packet. PID=4932 in C:\wamp\www\php_sandbox\index.php on line 2
        Warning: mysql_connect() [function.mysql-connect]: MySQL server has gone away in C:\wamp\www\php_sandbox\index.php on line 2

    I don't know what's wrong with this. Is it because of the port number?

    Read the article

  • Ubuntu 12.04 on Amazon EC2: /dev/xvda1 will be checked for errors at next reboot?

    - by cwd
    I'm running the latest Ubuntu 12.04 AMI (ami-a29943cb) from Canonical on Amazon EC2, and quite often when I log in I get the message:

        *** /dev/xvda1 will be checked for errors at next reboot ***

    I have read a bunch of documentation on this and seem to understand that every so many reboots (around 37; see Mount count / Maximum mount count below) Ubuntu wants to check a disk for errors. I can see that by using dumpe2fs -h /dev/xvda1 (reference) to get information such as:

        Last mounted on:          /
        Filesystem UUID:          1ad27d06-4ecf-493d-bb19-4710c3caf924
        Filesystem magic number:  0xEF53
        Filesystem revision #:    1 (dynamic)
        Filesystem features:      has_journal ext_attr resize_inode dir_index filetype needs_recovery extent flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
        Filesystem flags:         signed_directory_hash
        Default mount options:    (none)
        Filesystem state:         clean
        Errors behavior:          Continue
        Filesystem OS type:       Linux
        Inode count:              524288
        Block count:              2097152
        Reserved block count:     104857
        Free blocks:              1778055
        Free inodes:              482659
        First block:              0
        Block size:               4096
        Fragment size:            4096
        Reserved GDT blocks:      511
        Blocks per group:         32768
        Fragments per group:      32768
        Inodes per group:         8192
        Inode blocks per group:   512
        Flex block group size:    16
        Filesystem created:       Tue Apr 24 03:07:48 2012
        Last mount time:          Thu Nov  8 03:17:58 2012
        Last write time:          Tue Apr 24 03:08:52 2012
        Mount count:              3
        Maximum mount count:      37
        Last checked:             Tue Apr 24 03:07:48 2012
        Check interval:           15552000 (6 months)
        Next check after:         Sun Oct 21 03:07:48 2012
        Lifetime writes:          2454 MB
        Reserved blocks uid:      0 (user root)
        Reserved blocks gid:      0 (group root)
        First inode:              11
        Inode size:               256
        Required extra isize:     28
        Desired extra isize:      28
        Journal inode:            8
        Default directory hash:   half_md4
        Directory Hash Seed:      0a25e04c-6169-4d68-bfa6-a1acd8e39632
        Journal backup:           inode blocks
        Journal features:         journal_incompat_revoke
        Journal size:             128M
        Journal length:           32768
        Journal sequence:         0x0000158b
        Journal start:            1

    I've tried these things to get rid of the message, and usually badblocks is what does it for me:

        1. Run this command and reboot: sudo touch /forcefsck
        2. Run badblocks to check the disk: badblocks /dev/sda1
        3. Edit /etc/fstab and change the last "0" (the fs_passno column) accordingly and then reboot: the root filesystem should be specified with a fs_passno of 1, and other filesystems should have a fs_passno of 2.

    I don't understand:

        - If this is a virtual drive, shouldn't it be less prone to errors?
        - Was the image created with one of the flags set? If not, what is triggering it?
        - Why is fs_passno set to 0 on Amazon EC2 Ubuntu images? This is not the first one that is like this.

    Read the article

  • PDF from Umbraco | Creating PDF case studies from data in the Umbraco CMS

    - by Vizioz Limited
    Last week we launched the first version of our website based on Umbraco 4.5.2, and this week we have just added a bit of extra functionality to the case studies section which enables you to download the case studies as PDF documents.

    To do this we used the PDF Creator package by Darren Ferguson. This is actually a wrapper around a product from a company called Ibex, which is where you can download documentation for the mark-up required.

    The way Darren has made the implementation is really simple for anyone already familiar with the Umbraco CMS. You simply create a new template and call a user control macro; this then does the magic in the background and passes an XSLT file to the Ibex engine.

    What you need to be aware of is that you need to learn a new mark-up language called XSL-FO. This is actually part of the XSL 1.0 specification and is a language used to express print layouts.

    As an indication of timescale: going from knowing nothing about XSL-FO to the finished product that you can see on the website now has taken me 2 days of learning and just fiddling with the mark-up to get the final result.

    If anyone is interested I might post some code snippets to show you how some of it is done. I would also be really interested to have some feedback about the PDF layout and what you like and don't like about it.

    Cheers,
    Chris

    Posted using BlogPress from my iPad

    Read the article

  • C#: Adding Functionality to 3rd Party Libraries With Extension Methods

    - by James Michael Hare
    Ever have one of those third party libraries that you love but it's missing that one feature or one piece of syntactical candy that would make it so much more useful?  This, I truly think, is one of the best uses of extension methods.  I began discussing extension methods in my last post (which you can find here) where I expounded upon what I thought were some rules of thumb for using extension methods correctly.  As long as you keep in line with those (or similar) rules, they can often be useful for adding that little extra functionality or syntactical simplification for a library that you have little or no control over.

    Oh sure, you could take an open source project, download the source and add the methods you want, but then every time the library is updated you have to re-add your changes, which can be cumbersome and error prone.  And yes, you could possibly extend a class in a third party library and override features, but that's only if the class is not sealed, static, or constructed via factories.

    This is the perfect place to use an extension method!  And the best part is, you and your development team don't need to change anything!  Simply add the using for the namespace the extensions are in!

    So let's consider this example.  I love log4net!  Of all the logging libraries I've played with, it, to me, is one of the most flexible and configurable logging libraries, and it performs great.  But this isn't about log4net, well, not directly.  So why would I want to add functionality?  Well, it's missing one thing I really want in the ILog interface: the ability to specify the logging level at runtime.

    For example, let's say I declare my ILog instance like so:

        using log4net;

        public class LoggingTest
        {
            private static readonly ILog _log = LogManager.GetLogger(typeof(LoggingTest));
            ...
        }

    If you don't know log4net, the details aren't important, just that the field _log is the logger I have gotten from log4net.

    So now that I have that, I can log to it like so:

        _log.Debug("This is the lowest level of logging and just for debugging output.");
        _log.Info("This is an informational message.  Usual normal operation events.");
        _log.Warn("This is a warning, something suspect but not necessarily wrong.");
        _log.Error("This is an error, some sort of processing problem has happened.");
        _log.Fatal("Fatals usually indicate the program is dying hideously.");

    And there are many flavors of each of these to log using string formatting, to log exceptions, etc.  But one thing there isn't: the ability to easily choose the logging level at runtime.  Notice, the logging levels above are chosen at compile time.  Of course, you could do some fun stuff with lambdas and wrap it, but that would obscure the simplicity of the interface.  And yes, there is a Logger property you can dive down into where you can specify a Level, but the Level properties don't really match the ILog interface exactly, and then you have to manually build a LogEvent and... well, it gets messy.  I want something simple and sexy so I can say:

        _log.Log(someLevel, "This will be logged at whatever level I choose at runtime!");

    Now, some purists out there might say you should always know what level you want to log at, and for the most part I agree with them.  For the most part the ILog interface satisfies 99% of my needs.  In fact, for most application logging, yes, you do always know the level you will be logging at, but when writing a utility class, you may not always know what level your user wants.

    I'll tell you, one of my favorite things is to write reusable components.  If I had my druthers I'd write framework libraries and shared components all day!  And being able to easily log at a runtime-chosen level is a big need for me.  After all, if I want my code to really be re-usable, I shouldn't force a user to deal with the logging level I choose.

    One of my favorite uses for this is in Interceptors -- I'll describe Interceptors in my next post and some of my favorites -- for now just know that an Interceptor wraps a class and allows you to add functionality to an existing method without changing its signature.  At the risk of over-simplifying, it's a very generic implementation of the Decorator design pattern.

    So, say for example that you were writing an Interceptor that would time method calls and emit a log message if the method call execution time took beyond a certain threshold of time.  For instance, maybe if your database calls take more than 5,000 ms, you want to log a warning.  Or if a web method call takes over 1,000 ms, you want to log an informational message.  This would be an excellent use of logging at a generic level.

    So here was my personal wish-list of requirements for my task:

        - Be able to determine if a runtime-specified logging level is enabled.
        - Be able to log generically at a runtime-specified logging level.
        - Have the same look-and-feel of the existing Debug, Info, Warn, Error, and Fatal calls.

    Having the ability to also determine if logging for a level is on at runtime is important so you don't spend time building a potentially expensive logging message if that level is off.  Consider an Interceptor that may log parameters on entrance to the method.  If you choose to log those parameters at DEBUG level and DEBUG is not on, you don't want to spend the time serializing those parameters.

    Now, mine may not be the most elegant solution, but it performs really well since the enum I provide uses all contiguous values -- while it's never guaranteed, contiguous switch values usually get compiled into a jump table in IL which is VERY performant - O(1) - but even if it doesn't, it's still so fast you'd never need to worry about it.

    So first, I need a way to let users pass in logging levels.  Sure, log4net has a Level class, but it's a class with static members, and plus it provides way too many options compared to the ILog interface itself -- and wouldn't perform as well in my level-check -- so I define an enum like below.

        namespace Shared.Logging.Extensions
        {
            // enum to specify available logging levels.
            public enum LoggingLevel
            {
                Debug,
                Informational,
                Warning,
                Error,
                Fatal
            }
        }

    Now, once I have this, writing the extension methods I need is trivial.  Once again, I would typically /// comment fully, but I'm eliminating that for blogging brevity:

        namespace Shared.Logging.Extensions
        {
            // the extension methods to add functionality to the ILog interface
            public static class LogExtensions
            {
                // Determines if logging is enabled at a given level.
                public static bool IsLogEnabled(this ILog logger, LoggingLevel level)
                {
                    switch (level)
                    {
                        case LoggingLevel.Debug:         return logger.IsDebugEnabled;
                        case LoggingLevel.Informational: return logger.IsInfoEnabled;
                        case LoggingLevel.Warning:       return logger.IsWarnEnabled;
                        case LoggingLevel.Error:         return logger.IsErrorEnabled;
                        case LoggingLevel.Fatal:         return logger.IsFatalEnabled;
                    }
                    return false;
                }

                // Logs a simple message - uses same signature except adds LoggingLevel
                public static void Log(this ILog logger, LoggingLevel level, object message)
                {
                    switch (level)
                    {
                        case LoggingLevel.Debug:         logger.Debug(message); break;
                        case LoggingLevel.Informational: logger.Info(message); break;
                        case LoggingLevel.Warning:       logger.Warn(message); break;
                        case LoggingLevel.Error:         logger.Error(message); break;
                        case LoggingLevel.Fatal:         logger.Fatal(message); break;
                    }
                }

                // Logs a message and exception to the log at specified level.
                public static void Log(this ILog logger, LoggingLevel level, object message, Exception exception)
                {
                    switch (level)
                    {
                        case LoggingLevel.Debug:         logger.Debug(message, exception); break;
                        case LoggingLevel.Informational: logger.Info(message, exception); break;
                        case LoggingLevel.Warning:       logger.Warn(message, exception); break;
                        case LoggingLevel.Error:         logger.Error(message, exception); break;
                        case LoggingLevel.Fatal:         logger.Fatal(message, exception); break;
                    }
                }

                // Logs a formatted message to the log at the specified level.
                public static void LogFormat(this ILog logger, LoggingLevel level, string format,
                                             params object[] args)
                {
                    switch (level)
                    {
                        case LoggingLevel.Debug:         logger.DebugFormat(format, args); break;
                        case LoggingLevel.Informational: logger.InfoFormat(format, args); break;
                        case LoggingLevel.Warning:       logger.WarnFormat(format, args); break;
                        case LoggingLevel.Error:         logger.ErrorFormat(format, args); break;
                        case LoggingLevel.Fatal:         logger.FatalFormat(format, args); break;
                    }
                }
            }
        }

    So there it is!  I didn't have to modify the log4net source code, so if a new version comes out, I can just add the new assembly with no changes.  I didn't have to subclass and worry about developers not calling my sub-class instead of the original.  I simply provide the extension methods, and it's as if the long-lost extension methods were always a part of the ILog interface!

    Consider a very contrived example using the original interface:

        // using the original ILog interface
        public class DatabaseUtility
        {
            private static readonly ILog _log = LogManager.GetLogger(typeof(DatabaseUtility));

            // some theoretical method to time
            IDataReader Execute(string statement)
            {
                var timer = System.Diagnostics.Stopwatch.StartNew();

                // do DB magic

                // this is hard-coded to warn; if you want to change it at runtime, tough luck!
                if (timer.ElapsedMilliseconds > 5000 && _log.IsWarnEnabled)
                {
                    _log.WarnFormat("Statement {0} took too long to execute.", statement);
                }
                ...
            }
        }

    Now consider this alternate call where the logging level could be, perhaps, a property of the class:

        // using the new extension methods
        public class DatabaseUtility
        {
            private static readonly ILog _log = LogManager.GetLogger(typeof(DatabaseUtility));

            // allow logging level to be specified by the user of the class instead
            public LoggingLevel ThresholdLogLevel { get; set; }

            // some theoretical method to time
            IDataReader Execute(string statement)
            {
                var timer = System.Diagnostics.Stopwatch.StartNew();

                // do DB magic

                // the level is now chosen at runtime via ThresholdLogLevel
                if (timer.ElapsedMilliseconds > 5000 && _log.IsLogEnabled(ThresholdLogLevel))
                {
                    _log.LogFormat(ThresholdLogLevel, "Statement {0} took too long to execute.",
                        statement);
                }
                ...
            }
        }

    Next time, I'll show one of my favorite uses for these extension methods in an Interceptor.

    Read the article

  • IPL 2010 Season ZooZoo Ads Collection

    - by Suganya
    The IPL matches begin officially in a few hours and things are set, absolutely ready to go live for the audience. Almost the entire world is eagerly waiting for this IPL season. In this situation, Vodafone has again started its ZooZoo ad releases. Yes!!! The ZooZoos are back for this IPL season with new TV ads that would really make the audience roll on the floor. Last year we collected many ZooZoo ads and posted them in our blog. You can view them here. Likewise, this year we will update this post as and when new ZooZoo ads are released. So mark this page and come back for more ZooZoo ads every day. The ads so far:

        - Be The Star Of The Match
        - ZooZoo Jungle Laugh
        - ZooZoo EBill
        - ZooZoo Alien 2
        - ZooZoo Newspaper
        - ZooZoo Canon
        - ZooZoo Tramp Online
        - ZooZoo Lion
        - Dangling ZooZoo
        - ZooZoo Magic Show
        - Watch More ZooZoo Ads Online

    Read the article

  • Why is IaaS important in Azure…

    - by Steve Loethen
    Three weeks ago, Microsoft released the next phase of Azure.  I have had several clients waiting on this release; having waited, they are now more receptive to looking at the cloud.  Customers expressed fear of the unknown, and a fear of lack of control, even when that lack of control also means a huge degree of flexibility to innovate without concerns about the underlying infrastructure.  I think IaaS will be that "gateway drug" to get customers who have been hesitant to take another look at the cloud.  The dialog can change from the cloud being this big scary unknown to the cloud being a resource for workloads.  The conversations should have always been, and can now be even stronger, geared toward the following points:

    1) The cloud is not unicorns and glitter; the cloud is resources.  Compute, storage, DBs, service bus, cache...  Like many of the resources we have on-premise.  Not magic, just another resource with advantages and obstacles like any other resource.

    2) The cloud should be part of the conversation for any new project.  All of the same criteria should be applied, on-premise or off: cost, security, reliability, scalability, speed to deploy, cost of licenses, need to customize the image, complex workloads.  We have been having these discussions for years when we talk about on-premise projects.  We make decisions on OSs, databases, ESBs, configuration and products based on a myriad of factors.  We use the same factors, but now we have an additional set of resources to consider in our process.

    3) The cloud is a great solution looking for some interesting problems.  It is our job to recognize the right problems that fit into the cloud, weigh the factors and decide what to do.

    IaaS makes this discussion easier, offers more choices, and often choices that many enterprises will find better than PaaS.  Looking forward to helping clients realize the power of the cloud.

    Read the article

  • Oracle SQL Developer version 3.2.2 Released

    - by thatjeffsmith
    This is another maintenance release, but I don't want to minimize the work done in either the 3.2.1 or the 3.2.2 editions. The two releases include more than 400 bug fixes. Version 3.2 should be rocking and rolling and good to go while we work on the next major release! You can find the downloads and bug fixes in the normal places:

        - Download 3.2.2
        - Bug fixes

    Connection Names

    If you downloaded and used version 3.2.1 and noticed some of your connection names were no longer valid due to 'special' characters, we've loosened our restrictions a bit for 3.2.2. You can now go back to using spaces and hyphens in your connection names. [Screenshot: periods, spaces, hyphens should now all work]

    More Copy & Paste Stuff

    While fixing a bug, the developer decided to also enhance the feature while he was in the code. I love seeing this happen organically. No one is sitting over their shoulder with the red magic marker. No, I'm too far away to do that except on very special days. So here's a 'trick': if you want to copy cells from your grids, just drag the selected cells to the worksheet/editor. You'll get a comma-delimited list - very handy!

    [Screenshot: select cells, drag and drop up to the worksheet - voila! Comma-separated values]

    Read the article

  • Vision, Integration, Ability—Oracle is once again positioned as an E-Commerce Leader

    - by Jeri Kelley
    The new Gartner report is the fifth successive Magic Quadrant for E-Commerce to position Oracle as a leader. We’re proud of the result, but we’re not too surprised. Oracle Commerce’s functionality is uniquely aligned with a number of the major market trends Gartner describes in its report: from customers ‘expecting a seamless buying experience across all channels’, to organizations seeking to consolidate ‘B2B and B2C applications with a single underlying platform’.

    What we think sets Oracle Commerce apart

    Why are we a leader? We believe the key strengths of Oracle Commerce include:

    Outstanding Scalability and Versatility
    Oracle has a long and enviable track record of delivering B2B and B2C e-commerce solutions, and the Oracle Commerce solution supports a broad range of vertical industries, from retail to telecom, and manufacturing to distribution. Additionally, Oracle Commerce is engineered to scale simply and quickly to meet the changing needs of the enterprise.

    Oracle Integration
    Our commitment to seamless solutions integration allows customers to get the most from our ever evolving range of e-commerce and CX products, and deliver consistent, relevant, and personalized cross-channel buying experiences that drive customer satisfaction and boost revenue.

    Experience and Vision
    Oracle has a long and impressive history of delivering B2B and B2C e-commerce solutions to the world’s best brands. We’re constantly putting this experience to good use, and making our solutions even smarter. With powerful merchandising and business tools, and advanced promotions capabilities, Oracle Commerce is one of the most forward-thinking e-commerce solutions around.

    Read the report

    You can read Gartner’s full report here, or click here to find out more about our celebrated platform.

    Read the article

  • How can I create a 4TB partition on my software RAID5 device?

    - by Kris Harper
    I have set up a RAID5 device with three 2TB hard drives using mdadm. The device was successfully created, but I cannot seem to create a partition on it. When I try to make an ext3 or ext4 partition via Disk Utility, I get the following error:

        Error creating partition: helper exited with exit code 1:
        In part_add_partition: device_file=/dev/md0, start=0, size=4000526106624, type=
        Entering MS-DOS parser (offset=0, size=4000526106624)
        MSDOS_MAGIC found
        found partition type 0xee => protective MBR for GPT
        Exiting MS-DOS parser
        Entering EFI GPT parser
        GPT magic found
        partition_entry_lba=2
        num_entries=128
        size_of_entry=128
        Leaving EFI GPT parser
        EFI GPT partition table detected
        containing partition table scheme = 3
        got it
        got disk
        new partition
        guid '' is not valid
        type '' for GPT appear to be malformed

    I have seen this question, but that seems to suggest using GParted to do the partitioning. I'm fine with doing that, but my RAID device doesn't show up in the list of GParted devices, I suspect because it is a RAID array and not a regular disk. I have already created a GPT partition table on the device. How can I add a partition to it?

    Read the article

  • Twitter API Voting System

    - by Richard Jones
    So I blatantly got this idea from the MIX 10 event. At MIX they held a rock band talent competition type thing (I'm not quite sure of all the details). But the interesting part for me is how they collected votes. They used Twitter (what else, when you have a few thousand geeks available to you). The basic idea was that you tweeted your vote with a # tag, i.e.:

        #ROCKBANDVOTE vote Richard

    How cool...  So the question is, how do you write something to collate and count all the votes? Time to press the magic Visual Studio New Project button...

    Twitter has a really nice API that can be invoked from .NET. This is the snippet of code that will search for any given phrase, i.e. #ROCKBANDVOTE:

        public static XmlDocument GetSearchResults(string searchfor)
        {
            return GetSearchResults(searchfor, "");
        }

        public static XmlDocument GetSearchResults(string searchfor, string sinceid)
        {
            XmlDocument retdoc = new XmlDocument();

            try
            {
                string url = "http://search.twitter.com/search.atom?q=" + searchfor;
                if (sinceid.Length > 0)
                    url += "&since_id=" + sinceid;   // note the "&" so the parameter parses correctly
                HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
                request.Method = "POST";
                request.ContentType = "application/x-www-form-urlencoded";
                WebResponse res = request.GetResponse();
                retdoc.Load(res.GetResponseStream());
                res.Close();
            }
            catch { }
            return retdoc;
        }

    I've got two overloads that optionally let you pass in the last ID to look for, as well as what you want to search for. Note that Twitter rate-limits the number of requests you can send; see http://apiwiki.twitter.com/Rate-limiting. So realistically I wanted my app to run every hour or so and only pull out results that haven't been received before (hence the overload that takes the sinceid parameter). I'll post the code that parses the returned XML when it's finished.

    Read the article

  • What is the Xbox360's D3DRS_VIEWPORTENABLE equivalent on WinXP D3D9?

    - by Jim Buck
    I posted this on StackOverflow, but of course it should be posted here. I am maintaining a multiplatform codebase for Xbox360 and WinXP. I am seeing an issue on the XP side that appears to be related to D3DRS_VIEWPORTENABLE on the Xbox360 version not having an equivalent on WinXP D3D9. This article had an interesting idea, but the only way to construct an identity matrix is to supply negative numbers to D3DVIEWPORT9::X and D3DVIEWPORT9::Height, and they are unsigned numbers. (I tried to put in negative numbers anyway, but nothing interesting happened.) So, how does one emulate the behavior of D3DRS_VIEWPORTENABLE under WinXP/D3D9? (For clarity, the result I'm seeing is that a 2D screen-aligned quad works fine on Xbox360 but is offset/stretched on WinXP. In fact, as a result of applying the viewport transform, (0, 0) starts in the center of the screen on WinXP instead of in the lower-left corner like on the Xbox360.) Update: I didn't have an Xbox360 devkit at the time I wrote up this question, but I've since gotten one. I commented out the disabling of the D3DRS_VIEWPORTENABLE state, and the exact same behavior resulted on the Xbox360 as on the WinXP build. So, there must be some DirectX magic to bridge the gap here for emulating D3DRS_VIEWPORTENABLE being turned off on WinXP.

    Read the article

  • NRF Big Show 2011 -- Part 3

    - by David Dorf
    I'm back from the NRF show, having been one of the lucky people whose flight was not canceled. The show was very crowded, with a reported 20% increase in attendance, and everyone seemed in high spirits. After two years of sluggish retail sales, things are really picking up, and it was reflected in everyone's mood. The pop-up Disney Store in the Oracle booth was great and attracted lots of interest in their mobile POS. I know many attendees visited the Disney Store in Times Square to see the entire operation. It's an impressive two-story store that keeps kids engaged. The POS demonstration station, where most of our innovations were demoed, was always crowded. Unfortunately most of the demos used WiFi, and the signals from other booths prevented anything from working reliably. Nevertheless, the demo team did an excellent job walking people through the scenarios and explaining how shopping is being impacted by mobile, analytics, and RFID.

    Big Show Links:

        - Disney uncovers its store magic
        - Top 10 Things You Missed at the NRF Big Show 2011
        - Oracle Retail Stores Innovation Station at NRF Big Show 2011 (video)

    The buzz of the show was again around mobile solutions. Several companies are creating mobile POS using the iPod Touch, including integrations to Oracle POS for the following retailers:

        - Disney Stores with InfoGain
        - Victoria's Secret with InfoGain
        - Urban Outfitters with Starmount
        - The Gap with Global Bay

    Keeping with the mobile theme, the NRF released a revised version of their Mobile Blueprint at the show. It will be posted to the NRF site very soon. The alternate payments section had a major rewrite that provides a great overview of proximity and remote payment technologies.

    NRF Mobile Blueprint Links:

        - New mobile blueprint provides fresh insights
        - NRF Mobile Blueprint 2011 (slides)

    I hope to do some posts on some of the interesting companies I spoke with in the coming weeks.

    Read the article

  • Get the Information You Need. Delivered.

    - by Get Proactive Customer Adoption Team
    Don’t Take Chances with Alerts - Get Hot Topics

    When Oracle Support publishes an alert, how do you find out about it? I can see any number of ways you might stumble onto an alert that you need. For example, if you are visiting My Oracle Support in search of answers under the Knowledge tab and happen to notice, and click on, the Alert tab under the Knowledge Article region, you might see an alert listed for one of the products you use. There are other ways… like subscribing to one of the Oracle Blogs and finding the alert in your RSS feed because the blogger decided to write up that topic for the latest post. I’m sure your colleagues sometimes pass on critical alerts for your products, hopefully giving you the information before you needed it. Well, no matter how you learn about an alert, the important point is that you get the correct information in a timely way. Right? I must admit, the ‘magic’ required to find out via these methods makes me nervous. Rather than leave it to chance, I think you need a more reliable way to stay informed and receive alerts for your products when Oracle publishes them. You may not be aware of it, but there is a better way. Oracle Premier Support customers can leverage the “Hot Topics E-Mail.” You select the products and topics that interest you. Based on your choices, the system sends you the support-related information when Oracle Support publishes it. This way you and I can both relax, knowing you’ll have ready access to the alerts you need, and enjoy the breadth of support-related information you choose to subscribe to. This can include recently updated knowledge base articles, new bugs, and product news. If I’ve convinced you, you will want to know how to set up and subscribe to the Hot Topics E-Mail. The complete guide, Doc ID 793436.1, is waiting for you. Follow the instructions in the document, and you will always stay on top of the latest information from Oracle Support.

    Read the article

  • Convert your Hash keys to object properties in Ruby

    - by kerry
    Being a Ruby noob (and having a background in Groovy), I was a little surprised that you cannot access hash objects using dot notation.  I am writing an application that relies heavily on XML and JSON data.  This data will need to be displayed, and I would rather use book.author.first_name over book['author']['first_name'].  A quick search on Google yielded this post on the subject.

    So, taking the DRYOO (Don't Repeat Yourself Or Others) concept, I came up with this:

        class ::Hash
          # add keys to hash
          def to_obj
            self.each do |k, v|
              if v.kind_of? Hash
                v.to_obj
              end
              k = k.gsub(/\.|\s|-|\/|\'/, '_').downcase.to_sym
              # create and initialize an instance variable for this key/value pair
              self.instance_variable_set("@#{k}", v)
              # create the getter that returns the instance variable
              self.class.send(:define_method, k, proc { self.instance_variable_get("@#{k}") })
              # create the setter that sets the instance variable
              self.class.send(:define_method, "#{k}=", proc { |v| self.instance_variable_set("@#{k}", v) })
            end
            return self
          end
        end

    This works pretty well.  It converts each of your keys to properties of the Hash.  However, it doesn't sit very well with me, because I probably will not use 90% of the properties most of the time.  Why should I go through the performance overhead of creating instance variables for all of the unused ones?

    Enter the 'magic method' method_missing:

        class ::Hash
          def method_missing(name)
            return self[name] if key? name
            self.each { |k, v| return v if k.to_s.to_sym == name }
            super.method_missing name
          end
        end

    This is a much cleaner method for my purposes.  Quite simply, it checks to see if there is a key with the given symbol, and if not, loops through the keys and attempts to find one.

    I am a Ruby noob, so if there is something I am overlooking, please let me know.

    Read the article

  • Dell Docking Station Doesn’t Detect USB Mouse and Keyboard

    - by Ben Griswold
    I’ve found myself in this situation with multiple Dell docking stations and multiple Dell laptops running various Windows operating systems.  I don’t know why the docking station stops recognizing my USB mouse and keyboard - it just does.  It’s black magic.  The last time around, I just started plugging the mouse and keyboard into the docked laptop directly and went about my business (as if I wasn’t completely missing out on a couple of the core benefits of using a docking station).  I guess that’s what happens when you forget how you got yourself out of the mess the last time around.  I had been in this half-assed state for a couple of weeks now, but a coworker fortunately got themselves in and out of the same pickle this morning.  Procrastinate long enough and the solution will just come to you, right?

    Here’s how to get yourself out of this mess:

        1. Undock your computer
        2. Unplug your docking station
        3. Count to an arbitrary number greater than 12 (not sure this is really required, but…)
        4. Plug your docking station back in
        5. Redock your machine

    I put my machine to sleep before taking the aforementioned actions.  My coworker completely shut down his laptop instead.  The steps worked on both of our Win 7 machines this morning and, who knows, it might just work for you too.

    Read the article

  • 1.5 million Windows 7 phones sold…

    - by Boonei
    Microsoft announced that it has sold over 1.5 million Windows Phone 7 devices. Windows Phone 7 is a new generation of OS, and mobile operators, users, and device programmers all need to adopt it. It's not going to be an easy transition, because it's not an advanced/next version of Win 6.x for mobile. We have heard that development from Microsoft's side for Win 6.x devices will not continue after some time; we don't know how long it will keep getting support! Everything in it is quite new - the OS, the user interface, the Xbox sync - and it also requires mobile phone companies to run the OS on high-end chips, meaning at least 1 GHz. So the user segment occupied by phones like the HTC Wildfire is not the one being targeted.

    Hey! There is a catch with this magic number of 1.5 million... It depicts only the number of units sold to mobile operators and retailers, not the number of actual units held in consumers' hands and activated. The number could improve significantly in 2011, when Sprint and Verizon join the party in the United States. At least a dozen phone models running the Windows Phone 7 OS are now lined up in the rest of the world. One good thing customers can rejoice over is that Microsoft will push software updates directly to all its consumers; the operator will not interfere. We can expect strong sales going forward on just this important point, where Google's Android lacks the same.

    [Img Credit: Microsoft]

    This article, 1.5 million Windows 7 phones sold…, was originally published at Tech Dreams.

    Read the article
