Search Results

Search found 1636 results on 66 pages for 'streaming'.


  • WiFi signal is lost every 3 minutes

    - by Software Monkey
    For several weeks now my Android phone has been losing its WiFi signal momentarily, typically at about 3-minute intervals (about 3 minutes, 1.5 seconds) and occasionally at some longer interval that always seems to be just over 3 minutes. This causes an interruption of several seconds while the WiFi connection is re-established, typically fails any download/streaming that is happening, makes web sites "unreachable" and generally makes the phone unusable as a data device because of the frequency. The signal remains down for about a second, but the phone takes a few more seconds to reconnect to the router. This happens regardless of proximity to the router, which otherwise shows a very strong signal - usually -40 to -30 dBm or better in the same room, and nowhere worse than -60 dBm anywhere in the house. Changing channels does not help (I've tried 1, 4, 8 and 9). Nor does turning off the router's guest access. Nor does turning off the 5.0 GHz band. Monitoring the signal on my phone with WiFi Analyzer shows all WiFi signals on all channels drop to nothing when the WiFi connection is lost (there are two other networks on different channels strong enough to be relevant, with about 6 others constantly fading in and out). WiFi Analyzer shows 3 separate signals for my router: the main 2.4 GHz, the guest 2.4 GHz and the 5.0 GHz. Using WiFi Analyzer on my wife's phone side-by-side shows no change in signal when my phone drops, nor does her phone drop. Monitoring the signal using our laptop side-by-side likewise shows no signal loss, and likewise the laptop does not lose its WiFi connection. At work, however, the phone does not seem to exhibit the same behavior, or if it does, it's very occasional; monitoring it all day at work I only saw the signal drop 3 or 4 times. The signal strength of the various networks there is comparatively weak. AT&T were super helpful: "Sorry, we can't help you with WiFi problems. You could try doing a factory reset on your phone." </sarcasm> The router is relatively new, but has been working fine with this phone since last December. Phone: Motorola Atrix (about 8 months old). Router: Belkin N750 DB (F9K1103 v1 (01C)). Router firmware: 1.00.46 (2011/10/28 6:37:11). Security: WPA/WPA2-Personal (PSK)

    Read the article

  • Educate me - should I buy these prebuilt NAS (which is better) or make my own?

    - by user29336
    I'm trying to learn as much as possible, and I think I've learned quite a bit, so bear with me here under my confusion. I found a couple of NAS setups. I'm not sure if one is better than the other, other than the price being higher on some, and some coming with drives vs. not. Let me list my setup so you can get an idea of what I want to provide: MacBook Pro, Mac Mini for media streaming (so far), Windows 7 gaming computer, Xbox 360. I'd like to provide a storage system for all these devices so they can access files very easily, and I'd also like any of these devices to be able to stream media from this storage system. I'd like this storage system to be hassle free in terms of my confidence in the data integrity: if a drive fails, I want to know that I can replace the drive and all my files will still exist. I'd like to access this storage system OUTSIDE of my LAN. If I'm out on a job for work I'd like to log in, or be able to have people download some files. This brings me to a question: is this what iSCSI is? I'd like this data system to be able to download torrents. I want to mount any drive on this storage system onto my OS X laptop as if it were a local drive attached. (Is this what iSCSI is?) I'd like this system to have a GOOD web-based GUI. I don't want to install software to use it. I believe those are most of my requirements. If I'm missing something that I have no knowledge about, can someone educate me? Here are the systems I found: $729ish on Newegg: LaCie 5big Network 2 (comes with 5TB of space; iSCSI / Mac compatible, torrents, nice UI, + others?). Is this overpriced for what it provides? It almost seems like a great deal to me because of the 5TB of space it comes with vs. the other NAS systems that don't come with storage but cost $600-700. Should I get a different NAS system? Netgear? Others? Do they have the same features? Better? Is it better to buy your own disks? What about making my own? I'm tech savvy all around. It seems cheaper to buy a premade one, especially with the support/warranty it provides...

    Read the article

  • Turn off / disable the performance cache

    - by jessie
    OK, I run a streaming website and my CMS is giving me an error when uploading videos: "Failed To Find Flength File". I did some research, and the answer I got from the coder is below. I did do all of that, but the only thing I could not do is turn off what he refers to as the performance cache, talked about in the last sentence... I am on CentOS. Assuming the script is set up properly, you are probably dealing with some kind of write-caching. Some servers perform write-caching which prevents writing out the flength file or the entire CGITemp file during the upload. The flength file or the CGITemp file do not actually hit the disk until the upload is complete, making it worthless for reporting on progress during the upload. This may be fixed using a .htaccess file, assuming your host supports them. Here is a link to an excellent tutorial on using .htaccess files. I strongly recommend giving it a quick read before attempting to install your own .htaccess file. 1. A mod_security module for Apache. To fix it, just create a file called .htaccess (that's a period followed by "htaccess") and put the following lines in that file. Upload the file into the directory where the Uber-Uploader CGI ".pl" scripts reside, or in some directory above it (like your server's DOCUMENT_ROOT, i.e. the top level of your webspace). htaccess files must be uploaded in ASCII mode, not BINARY. You may need to CHMOD the htaccess file to 644 (RW-R--R--).
    # Turn off mod_security filtering.
    SecFilterEngine Off
    # The below probably isn't needed,
    # but better safe than sorry.
    SecFilterScanPOST Off
    If the above method does not work, try putting the following lines into the file instead:
    SetEnvIfNoCase Content-Type \
    "^multipart/form-data;" "MODSEC_NOPOSTBUFFERING=Do not buffer file uploads"
    mod_gzip_on No
    2. "Performance Cache" enabled on OS X SERVER. If you're running OS X Server and the progress bar isn't working, it could be because of "performance caching." Apparently if ANY of your hosted sites are using performance caching, then by default all sites (domains) will attempt to. The fix then is to disable the performance cache on all hosted sites. (A consolidated .htaccess for the first cause is sketched below.)
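    For reference, here is the whole .htaccess the quoted answer describes, gathered in one place. It simply combines the two alternatives quoted above (ModSecurity 1.x-era directives); it is a sketch assembled from the answer, not something verified on this particular host:

        # Option 1: turn off mod_security filtering for these upload scripts
        SecFilterEngine Off
        SecFilterScanPOST Off

        # Option 2: use these lines instead of Option 1 if it has no effect
        # SetEnvIfNoCase Content-Type "^multipart/form-data;" "MODSEC_NOPOSTBUFFERING=Do not buffer file uploads"
        # mod_gzip_on No

    As the answer notes, it goes in the directory holding the Uber-Uploader .pl scripts (or a parent directory), uploaded in ASCII mode and CHMODed to 644.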

    Read the article

  • Slow WLAN file transfer between server and tablet

    - by user266985
    My file server is running Ubuntu 12.04 and I'm sharing files from it over Samba. It is connected via gigabit ethernet. My desktop, running Windows 8.1, is also connected via gigabit ethernet. I can transfer files between the two and completely saturate that gigabit pipe. However, I just got a Surface Pro 2, and I'm trying to stream HD movies from my server to the device over WiFi. For some reason, I can't break much past 1.5 MB/s transferring files over the network. I've tried streaming through XBMC and a standard file copy; no difference. To add to the confusion, if I connect to my guest network and then use my VPN server (installed on the router) to access the file server, I get around 3.2 MB/s. I've been running diagnostics to determine the root cause and I think I've found it, but I have no idea what is causing it or how to fix it.
    Router: Asus RT-N66U
    Surface Pro 2 network card: Marvell Avastar 350N (driver 19/09/2013 v14.69.24044.150)
    inSSIDer: Link Score: 100, Co-Channels: 0, Overlapping: 0, 5 GHz Network Channel: 48+44
    iperf - File Server as server, Surface Pro 2 as client - TCP - performance acceptable:
    ------------------------------------------------------------
    Server listening on TCP port 5001
    TCP window size: 85.3 KByte (default)
    ------------------------------------------------------------
    [ 4] local 192.168.0.90 port 5001 connected with 192.168.0.56 port 57367
    [ ID] Interval       Transfer     Bandwidth
    [ 4]  0.0- 1.0 sec  10.1 MBytes  84.7 Mbits/sec
    [ 4]  1.0- 2.0 sec  10.4 MBytes  87.6 Mbits/sec
    [ 4]  2.0- 3.0 sec  10.6 MBytes  88.8 Mbits/sec
    [ 4]  3.0- 4.0 sec  10.7 MBytes  89.5 Mbits/sec
    [ 4]  4.0- 5.0 sec  10.1 MBytes  84.4 Mbits/sec
    [ 4]  5.0- 6.0 sec  10.2 MBytes  85.8 Mbits/sec
    [ 4]  6.0- 7.0 sec  7.04 MBytes  59.1 Mbits/sec
    [ 4]  7.0- 8.0 sec  10.8 MBytes  90.2 Mbits/sec
    [ 4]  8.0- 9.0 sec  10.6 MBytes  89.1 Mbits/sec
    [ 4]  9.0-10.0 sec  8.62 MBytes  72.3 Mbits/sec
    [ 4]  0.0-10.0 sec  99.2 MBytes  83.1 Mbits/sec
    iperf - Surface Pro 2 as server, File Server as client - performance poor:
    ------------------------------------------------------------
    Client connecting to 192.168.0.56, TCP port 5001
    TCP window size: 22.9 KByte (default)
    ------------------------------------------------------------
    [ 3] local 192.168.0.90 port 40233 connected with 192.168.0.56 port 5001
    [ ID] Interval       Transfer     Bandwidth
    [ 3]  0.0- 1.0 sec  1.50 MBytes  12.6 Mbits/sec
    [ 3]  1.0- 2.0 sec  1.50 MBytes  12.6 Mbits/sec
    [ 3]  2.0- 3.0 sec  1.50 MBytes  12.6 Mbits/sec
    [ 3]  3.0- 4.0 sec  1.25 MBytes  10.5 Mbits/sec
    [ 3]  4.0- 5.0 sec  1.62 MBytes  13.6 Mbits/sec
    [ 3]  5.0- 6.0 sec  1.50 MBytes  12.6 Mbits/sec
    [ 3]  6.0- 7.0 sec  1.38 MBytes  11.5 Mbits/sec
    [ 3]  7.0- 8.0 sec  1.50 MBytes  12.6 Mbits/sec
    [ 3]  8.0- 9.0 sec  1.50 MBytes  12.6 Mbits/sec
    [ 3]  9.0-10.0 sec  1.62 MBytes  13.6 Mbits/sec
    [ 3]  0.0-10.1 sec  15.0 MBytes  12.4 Mbits/sec
    For some reason it gets capped, and I haven't got a clue why. Any suggestions? Edit: My link speed is reported as 270 Mbps by Windows. I'm less than two metres from the router with a clear line of sight.
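    The exact commands behind output like the above are not shown in the post, but a stock iperf 2 TCP test of this shape is run roughly as follows (the server role is swapped between the two machines for the second measurement; the flags are an assumption based on the default 10-second run with per-second reports seen above):

        # on the machine acting as the iperf server
        iperf -s
        # on the machine acting as the iperf client: 10-second test, 1-second interval reports
        iperf -c 192.168.0.90 -t 10 -i 1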

    Read the article

  • Architectural advice - web camera remote access

    - by Alan Hollis
    I'm looking for architectural advice. I have a client for whom I've built a website which essentially allows users to view their web cameras remotely. The current flow of data is as follows:
    User opens page to view web camera image.
    JavaScript polls a URL on the server (appended with a unique timestamp) every 1000 ms.
    FTP connection is enabled for the camera's FTP user.
    Web camera opens an FTP connection to the server.
    Web camera begins taking photos.
    Web camera sends each photo to the FTP server.
    On image URL request: the server reads the latest image on the hard drive uploaded via FTP for that camera, and deletes any older images from the server.
    This is working okay at the moment for a small number of users/cameras (about 10 users and around the same number of cameras), but we're starting to worry about the scalability of this approach. My original plan was that instead of having the files read from the local disk, the web server would open up an FTP connection and read the latest images directly from there, meaning we should have been able to scale horizontally fairly easily. But FTP connection establishment times were too slow (mainly because PHP out of the box is unable to persist FTP connections), so we abandoned this approach and went straight for reading from the hard drive. The firmware provider for the cameras states they're able to build an HTTP client which, instead of using FTP to upload the image, could POST the image to a web server. This seems plausible enough to me, but I'm looking for some architectural advice. My current thought is a simple Nginx/PHP/Redis stack: the web camera issues POST requests with its latest image to Nginx/PHP, and the latest image for that camera is stored in Redis. The clients can then pull the latest image from Redis, which should be extremely quick as the images will always be stored in memory (a short code sketch of this flow follows at the end of this post). The data flow would then become:
    User opens page to view web camera image.
    JavaScript polls a URL on the server (appended with a unique timestamp) every 1000 ms.
    Camera is sent an HTTP request to start posting images to a provided URL.
    Web camera begins taking photos.
    Web camera sends POST requests to the server as fast as it can.
    On image URL request: the server reads the latest image from Redis and tells Redis to delete the older image.
    My questions are:
    Are there any greater overheads to transferring images via HTTP instead of FTP?
    Is there a simple way to calculate how many cameras we could potentially have streaming at once?
    Is there any way to prevent potentially DoS'ing our own servers with web camera requests?
    Is Redis a good solution to this problem?
    Should I abandon the PHP/Nginx combination and go for something else?
    Is this proposed solution actually any good?
    Will adding HTTPS to the mix cause posting the image to become too slow?
    Thanks in advance, Alan
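    To make the proposed flow concrete, here is a minimal sketch of the "latest image per camera" pattern. The question proposes Nginx/PHP/Redis; this sketch uses Python/Flask with the same Redis idea purely for illustration, and the route paths and key names are assumptions, not part of the original design:

        # Illustrative sketch only: Python/Flask stand-in for the proposed Nginx/PHP front end.
        import redis
        from flask import Flask, Response, abort, request

        app = Flask(__name__)
        store = redis.Redis()  # images live in memory, one key per camera

        @app.route("/cameras/<camera_id>/image", methods=["POST"])
        def upload_image(camera_id):
            # The camera firmware POSTs the raw JPEG body as fast as it can;
            # SET overwrites the previous value, so only the newest frame is kept.
            store.set(f"camera:{camera_id}:latest", request.get_data())
            return "", 204

        @app.route("/cameras/<camera_id>/image", methods=["GET"])
        def latest_image(camera_id):
            # The browser polls this URL (with a cache-busting timestamp) every 1000 ms.
            data = store.get(f"camera:{camera_id}:latest")
            if data is None:
                abort(404)
            return Response(data, mimetype="image/jpeg")

        if __name__ == "__main__":
            app.run()

    Because each POST simply overwrites the key, there is nothing to clean up: the key always holds the newest frame, which is the property the proposed design is after.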

    Read the article

  • Best WordPress Video Themes for a Video Blog

    - by Matt
    WordPress has made blogging so easy & fun, there are plenty of video blog themes that you can pick from. However there is always rarity in quality. We at JustSkins have gathered some high quality, tested, tried video themes list. We tried to find some WordPress themes for vloggers, we knew all along that there are very few yet some of them are just brilliant premium wordpress themes. More on that later, let’s find out some themes which you can install on your vlog right now. On Demand 2.0 A fully featured video WordPress premium theme from Press75. Includes  theme options panel for personal customization and content management options, post thumbnails, drop down navigation menu, custom widgets and lots more. Demo | Price: $75 | DOWNLOAD VideoZoom An outstanding premium WordPress video theme from WPZoom featuring standard video integration plus additionally it lets you play any video from all the popular video websites. VideoZoom theme also includes a featured video slider on the homepage, multiple post layout options, theme options panel, WordPress 3.0 menus, backgrounds etc. Demo | Price Single: $69, Developer: $149 | DOWNLOAD Vidley Press75′s easy to use premium WordPress video theme. This theme is full of great features, it can be a perfect choice if you intend to make it a portal someday..it is scalable to shape like a news portal or portfolios. The Theme is widget ready. It has ability to place Featured Content and Featured Category section on homepage. The drop down menus on this theme are nifty! Demo | Price $75 |  DOWNLOAD Live A video premium WordPress theme designed for streaming video, and live event broadcasting. You can embed live video broadcasts from third party services like Ustream etc, and features a prominent timer counting down to the next broadcast, rotating bumper images, Facebook and twitter integration for viewer interaction, theme admin options panel and more make this theme one of its kind. Demo | Price: $99, Support License: $149| DOWNLOAD Groovy Video Woo Themes is pioneer in making beautiful wordpress themes,  One such theme that is built by keeping the video blogger in mind. The Groovy Theme is very colourful video blog premium WordPress theme. Creating video posts is quick and easy with just a copy / paste of the video’s embed code. The theme enables automatic video resizing, plenty of widgets. Also allows you to pick color of your choice. Price: Single Use $70, Developer Price : $150 | DOWNLOAD Video Flick Another exciting Video blogging theme by Press75 is the Video Flick theme. Video Flick is compatible with any video service that provides embed code, or if you want to host your own videos, Video Flick is also compatible with FLV (Flash Video) and Quicktime formats. This theme allows you to either keep standard Blog and/or have Video posts. You can pick a light or dark color option. Demo | Price : $75 | DOWNLOAD Woo Tube An excellent video premium WordPress theme from Woothemes, the WooTube theme is a very easy video blog platform, as it comes with  automatic video resizing, a completely widgetised sidebar and 7 different colour schemes to choose from. The theme  has the ability to be used as a normal blog or a gallery. A very wise choice! Price: Single Use $70, Developer Price : $150 | DOWNLOAD eVid Theme One of the nicest WordPress theme designed specifically for the video bloggers. Simple to integrate videos from video hosts such as Youtube, Vimeo, Veoh, MetaCafe etc. 
Demo | Price: $19 | DOWNLOAD Tubular A video premium WordPress theme from StudioPress which can also be used as a simple website or a blog. The theme is also available in a light color version. Demo | Price: $59.95 | DOWNLOAD Video Elements 2.0 Another beautiful video premium WordPress theme from Press75. Video Elements 2.0 has been re-designed to include the features you need to easily run and maintain a video blog on WordPress. Demo | Price: $75 | DOWNLOAD TV Elements 3.0 The theme includes a featured video carousel on the homepage which can display any number of videos, a featured category section which displays up to 12 channels, automatic thumbnail creation and lots more… Demo | Price: $75 | DOWNLOAD Wave A beautiful premium video WordPress theme - flexible and super cool looking. The design has a very earthy feel to it. The theme has a featured video area and latest listings on the homepage. All in all, a simple design with no fancy features. Demo | Price: $35 | Download

    Read the article

  • SmtpClient and Locked File Attachments

    - by Rick Strahl
    Got a note a couple of days ago from a client using one of my generic routines that wraps SmtpClient. Apparently whenever a file has been attached to a message and emailed with SmtpClient the file remains locked after the message has been sent. Oddly this particular issue hasn’t cropped up before for me although these routines are in use in a number of applications I’ve built. The wrapper I use was built mainly to backfit an old pre-.NET 2.0 email client I built using Sockets to avoid the CDO nightmares of the .NET 1.x mail client. The current class retained the same class interface but now internally uses SmtpClient which holds a flat property interface that makes it less verbose to send off email messages. File attachments in this interface are handled by providing a comma delimited list for files in an Attachments string property which is then collected along with the other flat property settings and eventually passed on to SmtpClient in the form of a MailMessage structure. The jist of the code is something like this: /// <summary> /// Fully self contained mail sending method. Sends an email message by connecting /// and disconnecting from the email server. /// </summary> /// <returns>true or false</returns> public bool SendMail() { if (!this.Connect()) return false; try { // Create and configure the message MailMessage msg = this.GetMessage(); smtp.Send(msg); this.OnSendComplete(this); } catch (Exception ex) { string msg = ex.Message; if (ex.InnerException != null) msg = ex.InnerException.Message; this.SetError(msg); this.OnSendError(this); return false; } finally { // close connection and clear out headers // SmtpClient instance nulled out this.Close(); } return true; } /// <summary> /// Configures the message interface /// </summary> /// <param name="msg"></param> protected virtual MailMessage GetMessage() { MailMessage msg = new MailMessage(); msg.Body = this.Message; msg.Subject = this.Subject; msg.From = new MailAddress(this.SenderEmail, this.SenderName); if (!string.IsNullOrEmpty(this.ReplyTo)) msg.ReplyTo = new MailAddress(this.ReplyTo); // Send all the different recipients this.AssignMailAddresses(msg.To, this.Recipient); this.AssignMailAddresses(msg.CC, this.CC); this.AssignMailAddresses(msg.Bcc, this.BCC); if (!string.IsNullOrEmpty(this.Attachments)) { string[] files = this.Attachments.Split(new char[2] { ',', ';' }, StringSplitOptions.RemoveEmptyEntries); foreach (string file in files) { msg.Attachments.Add(new Attachment(file)); } } if (this.ContentType.StartsWith("text/html")) msg.IsBodyHtml = true; else msg.IsBodyHtml = false; msg.BodyEncoding = this.Encoding; … additional code omitted return msg; } Basically this code collects all the property settings of the wrapper object and applies them to the SmtpClient and in GetMessage() to an individual MailMessage properties. Specifically notice that attachment filenames are converted from a comma-delimited string to filenames from which new attachments are created. The code as it’s written however, will cause the problem with file attachments not being released properly. Internally .NET opens up stream handles and reads the files from disk to dump them into the email send stream. The attachments are always sent correctly but the local files are not immediately closed. 
As you probably guessed, the issue is simply that some resources are not automatically disposed when sending is complete, and sure enough the following code change fixes the problem:
// Create and configure the message
using (MailMessage msg = this.GetMessage())
{
    smtp.Send(msg);
    if (this.SendComplete != null)
        this.OnSendComplete(this);
    // or use an explicit msg.Dispose() here
}
The Message object requires an explicit call to Dispose() (or a using() block as I have here) to force the attachment files to get closed. I think this is rather odd behavior for this scenario however. The code I use passes in filenames, and my expectation of an API that accepts file names is that it uses the files by opening and streaming them and then closing them when done. Why keep the streams open and require an explicit .Dispose() by the calling code, which is bound to lead to unexpected behavior just as my customer ran into? Any API-level code should clean up as much as possible, and this is clearly not happening here, resulting in unexpected behavior. Apparently lots of other folks have run into this before, as I found based on a few Twitter comments on this topic. Odd to me too is that SmtpClient() doesn't implement IDisposable – it's only the MailMessage (and Attachments) that implement it and require it to clean up left-over resources like open file handles. This means that you couldn't even use a using() statement around the SmtpClient code to resolve this – instead you'd have to wrap it around the message object, which again is rather unexpected. Well, chalk that one up to another small unexpected behavior that wasted half an hour of my time – hopefully this post will help someone avoid the same half an hour of hunting and searching. Resources: Full code to SmptClientNative (West Wind Web Toolkit Repository) SmtpClient Documentation MSDN © Rick Strahl, West Wind Technologies, 2005-2010. Posted in .NET

    Read the article

  • Azure – Part 6 – Blob Storage Service

    - by Shaun
    When you migrate your application onto Azure, one of the biggest concerns is the external files. In the traditional model we understood, and could ensure, which machine and folder our application (website or web service) was located in, so we could use MapPath or some other method to read and write external files - for example images, text files or XML files, etc. But things change when we deploy on Azure. Azure is not a server or a single machine; it's a set of virtual server machines running under the Azure OS. Even worse, your application might be moved between these machines, so it's not safe to read or write external files on local disk on Azure. To resolve this issue, Windows Azure provides another storage service for us: Blob. Unlike the table service, the blob service is used to store text and binary data rather than structured data. It provides two types of blobs: Block Blobs and Page Blobs. Block Blobs are optimized for streaming. They are comprised of blocks, each of which is identified by a block ID, and each block can be a maximum of 4 MB in size. Page Blobs are optimized for random read/write operations and provide the ability to write to a range of bytes in a blob. They are a collection of pages; the maximum size for a page blob is 1 TB. In the managed library, the Azure SDK allows us to communicate with blobs through the classes CloudBlobClient, CloudBlobContainer, CloudBlockBlob and CloudPageBlob. Similar to the table service managed library, the CloudBlobClient lets us reach the blob service by passing our storage account information, and it is also responsible for creating the blob container if it does not exist. Then from the CloudBlobContainer we can save or load block blobs and page blobs via the CloudBlockBlob and CloudPageBlob classes. Let's improve our example from the previous posts and add a service method that allows the user to upload a logo image. On the server side I created a method named UploadLogo with 2 parameters: email and image. Then I created the storage account from the config file. I also added validation to ensure that the email passed in is valid.
    var storageAccount = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
    var accountContext = new DynamicDataContext<Account>(storageAccount);

    // validation
    var accountNumber = accountContext.Load()
        .Where(a => a.Email == email)
        .ToList()
        .Count;
    if (accountNumber <= 0)
    {
        throw new ApplicationException(string.Format("Cannot find the account with the email {0}.", email));
    }
    Then there are three steps for saving the image into the blob service. First, as with the table service, I created the container with a unique name, creating it if it does not exist.
    // create the blob container for account logos if it does not exist
    CloudBlobClient blobStorage = storageAccount.CreateCloudBlobClient();
    CloudBlobContainer container = blobStorage.GetContainerReference("account-logo");
    container.CreateIfNotExist();
    Then, since in this example I will just send the blob access URL back to the client, I need to open read permission on that container.
    // configure blob container for public access
    BlobContainerPermissions permissions = container.GetPermissions();
    permissions.PublicAccess = BlobContainerPublicAccessType.Container;
    container.SetPermissions(permissions);
    And at the end I build the blob resource name from the input file name and a Guid, save it to a block blob by using the UploadByteArray method, and finally return the URL of this blob back to the client side.
    // save the blob into the blob service
    string uniqueBlobName = string.Format("{0}_{1}.jpg", email, Guid.NewGuid().ToString());
    CloudBlockBlob blob = container.GetBlockBlobReference(uniqueBlobName);
    blob.UploadByteArray(image);

    return blob.Uri.ToString();
    Let's update the client side application a bit and see the result. Here I just use my simple console application to let the user input the email and the file name of the image. If it succeeds, it shows the URL of the blob on the server side so that we can view it through the web browser. Then we can see the logo I've just uploaded through that URL. You may notice that the blob URL is based on the container name and the blob's unique name. In the Azure SDK documentation there's a page on the rules for naming them, but I think the simple rule is: they must be valid as a URL address. So you cannot name the container with a dot or slash, as that would break the ADO.NET Data Services routing rule. For example, if you name the blob container Account.Logo it will throw an exception saying 400 Bad Request. Summary: In this short entry I covered the simple usage of the blob service to save images onto Azure. Since the Azure platform does not support the file system in the traditional way, we have to migrate our code for reading/writing files to the blob service before deploying to Azure. To reduce this effort Microsoft provides another approach named Drive, which allows us to read and write NTFS files just as we did before. It's built on top of the blob service but is better suited for file access. I will discuss it more in the next post. Hope this helps, Shaun. All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Using the new CSS Analyzer in JavaFX Scene Builder

    - by Jerome Cambon
    As you know, JavaFX provides many properties in the API that you can set to customize your components or make them behave as you want. For instance, for a Button, you can set its font or its max size. Using Scene Builder, these properties can be explored and modified using the Inspector. However, JavaFX also provides many other properties for fine-grained customization of your components: the CSS properties. These properties are typically set from a CSS stylesheet. For instance, you can set a background image on a Button, change the Button corners, etc. Using Scene Builder, until now, you could set a CSS property using the Inspector's Style and Stylesheet editors, but you had to go to the JavaFX CSS documentation to know which CSS properties can be applied to a given component. Fortunately, Scene Builder 1.1 recently added a very interesting new feature: the CSS Analyzer. It allows you to explore all the CSS properties available for a JavaFX component, and helps you build your CSS rules. A very simple example: make a Button rounded. Let's take a very simple example: you would like to customize your Buttons to make them rounded. First, enable the CSS Analyzer using the 'View->Show CSS Analyzer' menu. Grow the main window and the CSS Analyzer to get more room. Then drop a Button from the Library onto the Content View: the CSS Analyzer now shows the Button's CSS properties. As you can see, there is a '-fx-background-radius' CSS property that allows you to define the radius of the background (note that you can get the associated CSS documentation by clicking on the property name). You can then experiment with this by setting the Button's Style property from the Inspector. As the CSS doc explains, you can set the same radius for all 4 corners with a single number. Once the style value is applied, the Button is rounded, as expected. Look at the CSS Analyzer: the '-fx-background-radius' property now has 2 entries: the default one, and the one we just entered from the Style property. The new value "wins": it overrides the default one and becomes the actual value (to highlight this, the cell background becomes blue). Now, you will certainly prefer to apply this new style to all the Buttons of your FXML document and have a CSS rule for this. To do this, save your document first, and create a CSS file in the same directory as the document. Create an empty CSS file (e.g. test.css) and attach it to the root AnchorPane, by first selecting the AnchorPane, then using the Stylesheets editor in the Inspector. Add the corresponding CSS rule to your new test.css file from your preferred editor (NetBeans for me ;-) and save it:
    .button { -fx-background-radius: 10px; }
    Now, select your Button and have a look at the CSS Analyzer. As you can see, the Button inherits the CSS rule (since the Button is a child of the AnchorPane) and still has its inline Style. The inline style "wins", since it has precedence over the stylesheet. The CSS Analyzer columns are displayed in precedence order. Note the small right-arrow icons that let you jump to the source of a value (either test.css, or the Inspector in this case). Of course, unless you want to set a specific background radius for this particular Button, you can remove the inline Style from the Inspector. Changing the color of a TitledPane arrow: in some cases, it can be useful to select the inner element you want to style directly from the Content View. Drop a TitledPane onto the Content View.
    Then select the CSS cursor from the CSS Analyzer (the other cursor, on the left, lets you come back to 'standard' selection); it allows you to select an inner element. Select the TitledPane arrow, which gets a yellow background, and the Styleable Path is updated. To define a new CSS rule, you can first copy the Styleable Path, then paste it into your test.css file. Then add an entry to set -fx-background-color to red. You should have something like:
    .titled-pane:expanded .title .arrow-button .arrow { -fx-background-color: red; }
    As soon as test.css is saved, the change is taken into account in Scene Builder. You can also use the Styleable Path to discover all the inner elements of TitledPane by clicking on the arrow icon. More details: you can see the CSS Analyzer in action (and many other features) in the JavaOne BOF "BOF4279 - In-Depth Layout and Styling with the JavaFX Scene Builder", presented by my colleague Jean-Francois Denise. On the right-hand side, click on the Media link to go to the (streaming) video of the presentation. The Scene Builder support of CSS starts at 9:20; the CSS Analyzer presentation starts at 12:50.
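    For convenience, here is the finished test.css built over the course of the walkthrough - just the two rules given above, gathered into one file:

        .button {
            -fx-background-radius: 10px;
        }

        .titled-pane:expanded .title .arrow-button .arrow {
            -fx-background-color: red;
        }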

    Read the article

  • Silverlight Firestarter Wrap Up and WCF RIA Services Talk Sample Code

    - by dwahlin
    I had a great time attending and speaking at the Silverlight Firestarter event up in Redmond on December 2, 2010. In addition to getting a chance to hang out with a lot of cool people from Microsoft such as Scott Guthrie, John Papa, Tim Heuer, Brian Goldfarb, John Allwright, David Pugmire, Jesse Liberty, Jeff Handley, Yavor Georgiev, Jossef Goldberg, Mike Cook and many others, I also had a chance to chat with a lot of people attending the event and hear about what projects they’re working on which was awesome. If you didn’t get a chance to look through all of the new features coming in Silverlight 5 check out John Papa’s post on the subject. While at the Silverlight Firestarter event I gave a presentation on WCF RIA Services and wanted to get the code posted since several people have asked when it’d be available. The talk can be viewed by clicking the image below. Code from the talk follows as well as additional links. I had a few people ask about the green bracelet on my left hand since it looks like something you’d get from a waterpark. It was used to get us access down a little hall that led backstage and allowed us to go backstage during the event. I thought it looked kind of dorky but it was required to get through security. Sample Code from My WCF RIA Services Talk (To login to the 2 apps use “user” and “P@ssw0rd”. Make sure to do a rebuild of the projects in Visual Studio before running them.) View All Silverlight Firestarter Talks and Scott Guthrie’s Keynote WCF RIA Services SP1 Beta for Silverlight 4 WCF RIA Services Code Samples (including some SP1 samples) Improved binding support in EntitySet and EntityCollection with SP1 (Kyle McClellan’s Blog) Introducing an MVVM-Friendly DomainDataSource: The DomainCollectionView (Kyle McClellan’s Blog) I’ve had the chance to speak at a lot of conferences but never with as many cameras, streaming capabilities, people watching live and overall hype involved. Over 1000 people registered to attend the conference in person at the Microsoft campus and well over 15,000 to watch it through the live stream.  The event started for me on Tuesday afternoon with a flight up to Seattle from Phoenix. My flight was delayed 1 1/2 hours (I seem to be good at booking delayed flights) so I didn’t get up there until almost 8 PM. John Papa did a tech check at 9 PM that night and I was scheduled for 9:30 PM. We basically plugged in my laptop backstage (amazing number of servers, racks and audio devices back there) and made sure everything showed up properly on the projector and the machines recording the presentation. In addition to a dedicated show director, there were at least 5 tech people back stage and at least that many up in the booth running lights, audio, cameras, and other aspects of the show. I wish I would’ve taken a picture of the backstage setup since it was pretty massive – servers all over the place. I definitely gained a new appreciation for how much work goes into these types of events. Here’s what the room looked like right before my tech check– not real exciting at this point. That’s Yavor Georgiev (who spoke on WCF Services at the Firestarter) in the background. We had plenty of monitors to reference during the presentation. Two monitors for slides (right and left side) and a notes monitor. The 4th monitor showed the time and they’d type in notes to us as we talked (such as “You’re over time!” in my case since I went around 4 minutes over :-)). 
Wednesday morning I went back on campus at Microsoft and watched John Papa film a few Silverlight TV episodes with Dave Campbell and Ryan Plemons.   Next I had the chance to watch the dry run of the keynote with Scott Guthrie and John Papa. We were all blown away by the demos shown since they were even better than expected. Starting at 1 PM on Wednesday I went over to Building 35 and listened to Yavor Georgiev (WCF Services), Jaime Rodriguez (Windows Phone 7), Jesse Liberty (Data Binding) and Jossef Goldberg and Mike Cook (Silverlight Performance) give their different talks and we all shared feedback with each other which was a lot of fun. Jeff Handley from the RIA Services team came afterwards and listened to me give a dry run of my WCF RIA Services talk. He had some great feedback that I really appreciated getting. That night I hung out with John Papa and Ward Bell and listened to John walk through his keynote demos. I also got a sneak peak of the gift given to Dave Campbell for all his work with Silverlight Cream over the years. It’s a poster signed by all of the key people involved with Silverlight: Thursday morning I got up fairly early to get to the event center by 8 AM for speaker pictures. It was nice and quiet at that point although outside the room there was a huge line of people waiting to get in.     At around 8:30 AM everyone was let in and the main room was filled quickly. Two other overflow rooms in the Microsoft conference center (Building 33) were also filled to capacity. At around 9 AM Scott Guthrie kicked off the event and all the excitement started! From there it was all a blur but it was definitely a lot of fun. All of the sessions for the Silverlight Firestarter were recorded and can be watched here (including the keynote). Corey Schuman, John Papa and I also released 11 lab exercises and associated videos to help people get started with Silverlight. Definitely check them out if you’re interested in learning more! Level 100: Getting Started Lab 01 - WinForms and Silverlight Lab 02 - ASP.NET and Silverlight Lab 03 - XAML and Controls Lab 04 - Data Binding Level 200: Ready for More Lab 05 - Migrating Apps to Out-of-Browser Lab 06 - Great UX with Blend Lab 07 - Web Services and Silverlight Lab 08 - Using WCF RIA Services Level 300: Take me Further Lab 09 - Deep Dive into Out-of-Browser Lab 10 - Silverlight Patterns: Using MVVM Lab 11 - Silverlight and Windows Phone 7

    Read the article

  • Training on Demand Certification Packages for DBAs

    - by Antoinette O'Sullivan
    The demand for Database Administrators continues to grow.* Almost two-thirds of IT hiring managers indicate that they highly value certifications in validating IT skills and expertise.** (* Job satisfaction and DBA work growth rate: CNN Money's 2011 Best Jobs in America survey. ** Survey among nearly 1,700 respondents by CompTIA, the nonprofit trade association for the IT industry, cited in Certification Magazine, Feb. 14th, 2012.) Get Certified with Training On Demand. Are you an experienced database professional eager to achieve certification? Is time your most precious resource? Then try our new Training On Demand Certification Value Package with a 20% discount. These all-in-one packages give you everything you need to get certified with success. Why Training On Demand: expert training from Oracle's top instructors; sophisticated streaming video recording; available for 90 days, 24 hours a day, 7 days a week; whiteboarding and training labs for hands-on experience; start, stop, pause, jump or rewind sections of the course as needed; Oracle University instructor Q&A; a full-text search leads to the right video fragment in a matter of seconds. Watch this demo to see how it works. Additional certification resources: Benefits of Oracle Certification, Database Certification Paths, Available Database Certification Exams. Getting certified has never been easier! For assistance contact your local Oracle University Service Desk. Many organizations deploy both Oracle Database and MySQL side by side to serve different needs, and as a database professional you can find training courses on both topics at Oracle University! Check out the upcoming Oracle Database 11g training courses and MySQL training courses. Even if you're only managing Oracle Databases at this point in time, getting familiar with MySQL Database will broaden your career path, given growing job demand.
These Value Packages are also available with the following training formats: In-Class, Live Virtual Class and Self Study: MySQL Database Administration Value Packages Your Savings plus get a FREE Retake  save 5% save 20% save 20% save 20%   In Class Edition Live Virtual Class Edition Self-Study Edition Training On Demand MySQL Database Administrator Certification Value Package View Package View Package View Package View Package MySQL Developer Value Packages Your Savings plus get a FREE Retake  save 5% save 20% save 20% save 20%   In Class Edition Live Virtual Class Edition Self-Study Edition Training On Demand       MySQL Developer Certification Value Package View Package View Package     Oracle Database 10g Value Packages Your Savings plus get a FREE Retake  save 5% save 20% save 20% save 20%   In Class Edition Live Virtual Class Edition Self-Study Edition Training On Demand Oracle Database 10g Administrator Certified Associate Certification Value Package View Package View Package View Package   Oracle Database 10g Administrator Certified Professional Certification Value Package View Package View Package View Package   Oracle Database 11g Value Packages Your Savings plus get a FREE Retake  save 5% save 20% save 20% save 20%   In Class Edition Live Virtual Class Edition Self-Study Edition Training On Demand Oracle Database 11g Administrator Certified Associate Certification Value Package View Package View Package View Package View Package Oracle Database 11g Administrator Certified Professional Certification Value Package View Package View Package View Package View Package Exam Prep Seminar Value Package: Oracle Database Admin 1       View Package Oracle Database 11g Administrator Certified Professional UPGRADE Certification Value Package       View Package Oracle Real Application Clusters 11g and Grid Infrastructure Administraton Certified Expert Certification Value Package       View Package Exam Prep Seminar Value Package: Oracle Database Admin 2        View Package Exam Prep Seminar Value Package: Oracle RAC 11g and Grid Infrastructure Administration       View Package Exam Prep Seminar Value Package: Upgrade Oracle Certified Professional (OCP) to Oracle Database 11g       View Package SQL and PL/SQL Value Packages Your Savings plus get a FREE Retake  save 5% save 20% save 20% save 20%   In Class Edition Live Virtual Class Edition Self-Study Edition Training On Demand Oracle Database Sql Expert Certification Value Package View Package View Package View Package View Package Exam Prep Seminar Value Package: Oracle Database SQL       View Package View our Certification Value Packages Mention this code at the time of booking: E1245 Connect For a full list of MySQL Training courses and events, go to http://oracle.com/education/mysql.

    Read the article

  • Minidlna Directory Issues

    - by Somnambulist
    I've done my searching and can't find an answer to THIS specific issue. I have my minidlna set up and running - but it's not really done properly. First off, when I open the server on my Blu-ray player, all of my movies are listed twice - when they are certainly not saved on my external twice. Second, when I open the server - rather than reading "Movies", "TV", "Music", etc. - it just mashes all of my movies, TV, and some other folders together with no real organization. I never had this problem with my Windows setup, so I know it's something configured improperly more so than my external drive giving me grief. Here's my minidlna.conf file:
    # This is the configuration file for the MiniDLNA daemon, a DLNA/UPnP-AV media
    # server.
    #
    # Unless otherwise noted, the commented out options show their default value.
    #
    # On Debian, you can also refer to the minidlna.conf(5) man page for
    # documentation about this file.
    media_dir=/media/somnambulist/Ghost In You
    # This option can be specified more than once if you want multiple directories
    # scanned.
    #
    # If you want to restrict a media_dir to a specific content type, you can
    # prepend the directory name with a letter representing the type (A, P or V),
    # followed by a comma, as so:
    # * "A" for audio (eg. media_dir=A,/var/lib/minidlna/music)
    # * "P" for pictures (eg. media_dir=P,/var/lib/minidlna/pictures)
    # * "V" for video (eg. media_dir=V,/var/lib/minidlna/videos)
    #
    # WARNING: After changing this option, you need to rebuild the database. Either
    # run minidlna with the '-R' option, or delete the 'files.db' file
    # from the db_dir directory (see below).
    # On Debian, you can run, as root, 'service minidlna force-reload' instead.
    #media_dir=/var/lib/minidlna
    media_dir=V,/media/somnambulist/Ghost In You/Movies
    media_dir=V,/media/somnambulist/Ghost In You/TV
    media_dir=P,/home/somnambulist/Pictures
    # Path to the directory that should hold the database and album art cache.
    db_dir=/home/somnambulist/serverart
    # Path to the directory that should hold the log file.
    log_dir=/home/somnambulist/serverlog
    # Minimum level of importance of messages to be logged.
    # Must be one of "off", "fatal", "error", "warn", "info" or "debug".
    # "off" turns off logging entirely, "fatal" is the highest level of importance
    # and "debug" the lowest.
    #log_level=warn
    # Use a different container as the root of the directory tree presented to
    # clients. The possible values are:
    # * "." - standard container
    # * "B" - "Browse Directory"
    # * "M" - "Music"
    # * "P" - "Pictures"
    # * "V" - "Video"
    # if you specify "B" and client device is audio-only then "Music/Folders" will be used as root
    root_container=B
    # Network interface(s) to bind to (e.g. eth0), comma delimited.
    #network_interface=
    # IPv4 address to listen on (e.g. 192.0.2.1).
    #listening_ip=
    # Port number for HTTP traffic (descriptions, SOAP, media transfer).
    port=8200
    # URL presented to clients.
    # The default is the IP address of the server on port 80.
    #presentation_url=http://example.com:80
    # Name that the DLNA server presents to clients.
    friendly_name=Somnambulist Media Server
    # Serial number the server reports to clients.
    serial=12345678
    # Model name the server reports to clients.
    #model_name=Windows Media Connect compatible (MiniDLNA)
    # Model number the server reports to clients.
    model_number=1
    # Automatic discovery of new files in the media_dir directory.
    #inotify=yes
    # List of file names to look for when searching for album art. Names should be
    # delimited with a forward slash ("/").
    album_art_names=Cover.jpg/cover.jpg/AlbumArtSmall.jpg/albumartsmall.jpg/AlbumArt.jpg/albumart.jpg/Album.jpg/album.jpg/Folder.jpg/folder.jpg/Thumb.jpg/thumb.jpg
    # Strictly adhere to DLNA standards.
    # This allows server-side downscaling of very large JPEG images, which may
    # decrease JPEG serving performance on (at least) Sony DLNA products.
    #strict_dlna=no
    # Support for streaming .jpg and .mp3 files to a TiVo supporting HMO.
    #enable_tivo=no
    # Notify interval, in seconds.
    #notify_interval=895
    # Path to the MiniSSDPd socket, for MiniSSDPd support.
    #minissdpdsocket=/run/minissdpd.sock
    And here's the error I get in the terminal when I run sudo service minidlna restart and sudo service minidlna force-reload:
    Force restart error:
    Restarting DLNA/UPnP-AV media server minidlna
    [2013/08/12 21:19:27] minidlna.c:474: error: Media directory "/media/somnambulist/Ghost In You/Movies" not accessible! [Permission denied]
    [2013/08/12 21:19:27] minidlna.c:474: error: Media directory "/media/somnambulist/Ghost In You/TV" not accessible! [Permission denied]
    Force-reload error:
    Restarting DLNA/UPnP-AV media server minidlna
    [2013/08/12 21:19:46] minidlna.c:474: error: Media directory "/media/somnambulist/Ghost In You/Movies" not accessible! [Permission denied]
    [2013/08/12 21:19:46] minidlna.c:474: error: Media directory "/media/somnambulist/Ghost In You/TV" not accessible! [Permission denied]
    rm: cannot remove ‘/home/somnambulist/serverart/files.db’: Permission denied
    rm: cannot remove ‘/home/somnambulist/serverart/art_cache/media/somnambulist/Ghost In You/Movies/Slumdog Millionaire/Slumdog.Millionaire.Cover.jpg’: Permission denied
    rm: cannot remove ‘/home/somnambulist/serverart/art_cache/media/somnambulist/Ghost In You/Movies/Zack and Miri Make a Porno/ZackAndMiriMakeAPornoCover.jpg’: Permission denied
    [2013/08/12 21:19:46] minidlna.c:744: warn: Failed to clean old file cache. [ OK ]
    I've spent hours on this at this point, read through various files - and even had a friend who is relatively Ubuntu-savvy try to help me via chat - no such luck. Thanks in advance for any help.

    Read the article

  • SQL SERVER – TechEd India 2012 – Content, Speakers and Lots of Fun

    - by pinaldave
    TechEd is one event which every developer and IT professional looks forward to attending. It is an opportunity of a lifetime, and no matter how many times one gets the chance to engage with it, it is never enough. I still remember every single moment of every TechEd I have attended so far. We are less than 100 hours away from the TechEd India 2012 event. This event is the one must-attend event for every technology enthusiast. For the fourth time in a row I am going to attend this event and I am as excited as the first time. There are going to be two very solid SQL Server tracks this time and I will be attending both tracks end to end. Here is my view on each of the 10 sessions. Each session is carefully crafted and leading experts from the industry will present it.
    Day 1, March 21, 2012
    T-SQL Rediscovered with SQL Server 2012 – This session is going to bring out some of the lesser known enhancements that came with SQL Server 2012. When I learned that Jacob Sebastian is going to do this session, my reaction was DEMO, DEMO and DEMO! Jacob spends hours and hours of his time preparing his sessions, and this will be one of those sessions that I am confident will be delivered over and over throughout the next many events.
    Catapult your data with SQL Server 2012 Integration Services – Praveen is an expert storyteller and one of the wizards when it comes to SQL Server and business intelligence. He is surely going to mesmerize you with some interesting insights on SSIS performance too.
    Processing Big Data with SQL Server 2012 and Hadoop – There are three sessions on Big Data at TechEd India 2012, and Stephen is going to deliver one of them. Watching Stephen present is always a joy and quite entertaining. He shares knowledge with his typical humor, which captures one's attention. I wrote about what Big Data is in a blog post.
    SQL Server Misconceptions and Resolutions – I will be presenting this session along with Vinod Kumar. READ MORE HERE.
    Securing with ContainedDB in SQL Server 2012 – Pranab is an expert when it comes to SQL Server and security. I have seen him presenting and he is indeed very pleasant to watch; he makes even a dry subject like security lively. A Contained Database is a database which contains all the necessary settings and metadata, making the database easily portable to another server. This database will contain all the necessary details and will not have to depend on the server where it is installed for anything. You can take this database and move it to another server without any worries.
    Day 3, March 23, 2012
    Peeling SQL Server like an Onion: Internals Demystified – Vinod Kumar has been writing about this extensively on his blog. In a recent conversation he suggested that he will be creating very exclusive content for this presentation. I have known Vinod for a long time and have worked with him on many community activities. I am going to pay special attention to the details. I know Vinod has a few giveaways planned for those attending the session - if only he shares them with us.
    Speed Up – Parallel Processes and unparalleled Performance – Performance tuning is my favorite subject. I will be discussing the effect of parallelism on performance in this session. Hear me out: there will be lots of quiz questions during this session, and if you get the answers correct you can win some really cool goodies – I promise! READ MORE HERE.
    Keep your database available – AlwaysOn – Balmukund is like an army man. He is always ready to show and prove that he has the coolest toys in terms of SQL Server and he knows how to keep them running AlwaysOn. Availability groups, listener, clustering, failover, read-only replicas etc. will all be demoed in this session. This is really heavy but very interesting content, not to be missed.
    Lesser known facts about SQL Server Backup and Restore – Amit Banerjee – this name is known internationally for solving SQL Server problems in 140 characters. He has already blogged about this and this topic is going to be interesting. A successful restore strategy for applications is only as good as their last known good backup. I have a few difficult questions to ask Amit and I am very sure that his unique style will entertain people. By the way, one of his slides may give a few in the audience a funny heart attack.
    Top 5 reasons why you want SQL Server 2012 BI – Praveen plans to take a tour of some of the BI enhancements introduced in the new version. Business Insights with SQL Server is a critical building block and this version of SQL Server is no exception. As a matter of fact, when I saw the demos he is going to show during this session, I wished I could set all of this up on my own machine. If you miss this session you will miss one of the most informative sessions of the day.
    TechEd India 2012 also has live streaming of some content, which can be watched here. The TechEd Team is planning to have some really good exclusive content on this channel as well. If you spot me, do not hesitate to come by and introduce yourself - I want to remember you! Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, SQLServer, T SQL, Technology Tagged: TechEd, TechEdIn

    Read the article

  • Aggregating cache data from OCEP in CQL

    - by Manju James
    There are several use cases where OCEP applications need to join stream data with external data, such as data available in a Coherence cache. OCEP’s streaming language, CQL, supports simple cache-key based joins of stream data with data in Coherence (more complex queries will be supported in a future release). However, there are instances where you may need to aggregate the data in Coherence based on input data from a stream. This blog describes a sample that does just that. For our sample, we will use a simplified credit card fraud detection use case. The input to this sample application is a stream of credit card transaction data. The input stream contains information like the credit card ID, transaction time and transaction amount. The purpose of this application is to detect suspicious transactions and send out a warning event. For the sake of simplicity, we will assume that all transactions with amounts greater than $1000 are suspicious. The transaction history is available in a Coherence distributed cache. For every suspicious transaction detected, a warning event must be sent with maximum amount, total amount and total number of transactions over the past 30 days, as shown in the diagram below. Application Input Stream input to the EPN contains events of type CCTransactionEvent. This input has to be joined with the cache with all credit card transactions. The cache is configured in the EPN as shown below: <wlevs:caching-system id="CohCacheSystem" provider="coherence"/> <wlevs:cache id="CCTransactionsCache" value-type="CCTransactionEvent" key-properties="cardID, transactionTime" caching-system="CohCacheSystem"> </wlevs:cache> Application Output The output that must be produced by the application is a fraud warning event. This event is configured in the spring file as shown below. Source for cardHistory property can be seen here. <wlevs:event-type type-name="FraudWarningEvent"> <wlevs:properties type="tuple"> <wlevs:property name="cardID" type="CHAR"/> <wlevs:property name="transactionTime" type="BIGINT"/> <wlevs:property name="transactionAmount" type="DOUBLE"/> <wlevs:property name="cardHistory" type="OBJECT"/> </wlevs:properties </wlevs:event-type> Cache Data Aggregation using Java Cartridge In the output warning event, cardHistory property contains data from the cache aggregated over the past 30 days. To get this information, we use a java cartridge method. This method uses Coherence’s query API on credit card transactions cache to get the required information. Therefore, the java cartridge method requires a reference to the cache. This may be set up by configuring it in the spring context file as shown below: <bean class="com.oracle.cep.ccfraud.CCTransactionsAggregator"> <property name="cache" ref="CCTransactionsCache"/> </bean> This is used by the java class to set a static property: public void setCache(Map cache) { s_cache = (NamedCache) cache; } The code snippet below shows how the total of all the transaction amounts in the past 30 days is computed. Rest of the information required by CardHistory object is calculated in a similar manner. Complete source of this class can be found here. To find out more information about using Coherence's API to query a cache, please refer Coherence Developer’s Guide. 
public static CardHistoryData execute(String cardID) {
    …
    Filter filter = QueryHelper.createFilter("cardID = :cardID and transactionTime > :transactionTime", map);
    CardHistoryData history = new CardHistoryData();
    Double sum = (Double) s_cache.aggregate(filter, new DoubleSum("getTransactionAmount"));
    history.setTotalAmount(sum);
    …
    return history;
}

The Java cartridge method is used from CQL as seen below:

select cardID, transactionTime, transactionAmount, CCTransactionsAggregator.execute(cardID) as cardHistory
from inputChannel
where transactionAmount > 1000

This produces a warning event, with history data, for every credit card transaction over $1000. That is all there is to it. The complete source for the sample application, along with the configuration files, is available here. In the sample, I use a simple Java bean to load the cache with initial transaction history data. An input adapter is used to create and send transaction events for the input stream.
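The elided parts of the method above compute the remaining fields of the history object. As a rough illustration only, here is a minimal sketch of what a complete cartridge class might look like using the same Coherence query API; the CardHistoryData setters other than setTotalAmount, the 30-day cutoff calculation and the extra aggregator choices (DoubleMax, Count) are assumptions for illustration, not code from the article.

    import java.util.HashMap;
    import java.util.Map;
    import com.tangosol.net.NamedCache;
    import com.tangosol.util.Filter;
    import com.tangosol.util.QueryHelper;
    import com.tangosol.util.aggregator.Count;
    import com.tangosol.util.aggregator.DoubleMax;
    import com.tangosol.util.aggregator.DoubleSum;

    public class CCTransactionsAggregator {

        private static NamedCache s_cache;

        // Injected from the Spring context file shown above.
        public void setCache(Map cache) {
            s_cache = (NamedCache) cache;
        }

        public static CardHistoryData execute(String cardID) {
            // Look back 30 days from now (assumed window; the article elides this).
            long cutoff = System.currentTimeMillis() - 30L * 24L * 60L * 60L * 1000L;
            Map<String, Object> map = new HashMap<String, Object>();
            map.put("cardID", cardID);
            map.put("transactionTime", cutoff);
            Filter filter = QueryHelper.createFilter(
                    "cardID = :cardID and transactionTime > :transactionTime", map);

            CardHistoryData history = new CardHistoryData();
            history.setTotalAmount((Double) s_cache.aggregate(filter,
                    new DoubleSum("getTransactionAmount")));
            // Assumed setters for the other two fields mentioned in the article.
            history.setMaxAmount((Double) s_cache.aggregate(filter,
                    new DoubleMax("getTransactionAmount")));
            history.setTransactionCount((Integer) s_cache.aggregate(filter,
                    new Count()));
            return history;
        }
    }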

    Read the article

  • Interview with Ronald Bradford about MySQL Connect

    - by Keith Larson
    Ronald Bradford, an Oracle ACE Director, has been busy with database consulting and book writing (EffectiveMySQL) while traveling and speaking around the world in support of MySQL. I was able to take some of his time to get an interview on his thoughts about the MySQL Connect conference.

Keith Larson: What were your thoughts when you heard that Oracle was going to provide the community with the MySQL Connect conference?
Ronald Bradford: Oracle has already been providing various different local community events including OTN Tech Days and MySQL community days. These are great for local regions both in the US and abroad. In previous years there has been an increase of content at Oracle Open World, however that benefits the Oracle community far more than the MySQL community. It is good to see that Oracle is realizing the benefit in providing a large scale dedicated event for the MySQL community that includes speakers from the MySQL development teams, invested companies in the ecosystem and other community evangelists. I fully expect a successful event and look forward to hopefully seeing MySQL Connect at the upcoming Brazil and Japan OOW conferences and perhaps an event on the East Coast.
Keith Larson: Since you are part of the content committee, what did you think of the submissions that were received during the call for papers?
Ronald Bradford: There was a large number of quality submissions for the number of available presentation sessions. As in previous years as a committee member for the annual MySQL conference, there is always a large variety of common cornerstone MySQL features as well as new products and upcoming companies sharing their MySQL experiences. All of the usual major players in the ecosystem will be presenting at MySQL Connect, including Facebook, Twitter, Yahoo, Continuent, Percona, Tokutek, Sphinx and Amazon, to name a few. This ensures the event will have a large number of quality speakers, and attendees will have a difficult time choosing what to attend.
Keith Larson: What sessions are you looking forward to attending?
Ronald Bradford: As with most quality conferences you can only be in one place at one time, so with multiple tracks per session it is always difficult to decide. There is continued work and success with MySQL Cluster, and a number of sessions on it that I am sure will be popular. The features that interest me the most are around the optimizer, where there are several sessions on new features, and the importance of backups; there are three presentations in this area to choose from.
Keith Larson: Are you going to cover any of the content in your books at your MySQL Connect sessions?
Ronald Bradford: I will be giving two presentations at MySQL Connect. The first will include the techniques available for creating better indexes, where I will be touching on some aspects of the first Effective MySQL book, Optimizing SQL Statements. In my second presentation, drawn from experiences of managing 500+ AWS MySQL instances, I will be touching on areas including SQL tuning, backup and recovery, and scale out with replication. These are the key topics of the initial books in the Effective MySQL series, which focus on performance, scalability and business continuity. The books, however, cover a far greater amount of detail than can be presented in a 1-hour session.
Keith Larson: What features of MySQL 5.6 are you looking forward to the most?
Ronald Bradford: I am very impressed with the optimizer trace feature. The ability to see this exposed information is invaluable not just for MySQL 5.6, but also for applying what is discerned to optimizing SQL statements in earlier versions of MySQL. Not everybody understands that it is easy to deploy a MySQL 5.6 slave into an existing topology running an older version of MySQL for evaluation of many new features. You can use the new mysqlbinlog streaming feature for duplicating master binary logs on an older version with a MySQL 5.6 slave. The improvements in instrumentation in the Performance Schema are exciting. And, as my upcoming Replication Techniques in Depth title, which will be available for sale at MySQL Connect, covers, there are numerous replication features, some long overdue, that provide significant management benefits: crash-safe slaves, Global Transaction Identifiers (GTIDs) and checksums, just to mention a few.
Keith Larson: You have been to numerous conferences; what would you recommend for people at the conference?
Ronald Bradford: Make the time to meet and introduce yourself to the speakers that cover the topics that most interest you. The MySQL ecosystem has a very strong community. The relationships you build with presenters, developers and architects in MySQL can be invaluable, however they are created over time. Get to know these people and interact with them over time. This is the opportunity to learn more than just the content from a 1-hour session.
Keith Larson: Any additional tips for handling the long hours?
Ronald Bradford: Conferences can be hard, especially with all the post-event drinking. This is a two-day event and I am sure it will include additional events on Friday and Saturday night, so come well prepared, and leave work behind. Take the time to learn something new. You can always catch up on sleep later.
Keith Larson: Thank you so much for taking some time to do this. I look forward to seeing you at the MySQL Connect conference. Please stay tuned here for more updates on MySQL.
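For readers who have not tried the optimizer trace feature Ronald mentions, here is a minimal, hypothetical sketch of enabling it for a session and reading the trace back from INFORMATION_SCHEMA on a MySQL 5.6 server via JDBC; the connection settings, table and traced query are placeholders, not anything from the interview.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    public class OptimizerTraceExample {
        public static void main(String[] args) throws Exception {
            // Placeholder connection details; requires MySQL Connector/J on the classpath.
            try (Connection con = DriverManager.getConnection(
                         "jdbc:mysql://localhost:3306/test", "appuser", "secret");
                 Statement stmt = con.createStatement()) {

                // Enable the optimizer trace for this session (MySQL 5.6+).
                stmt.execute("SET optimizer_trace='enabled=on'");

                // Run the statement whose plan choice you want to inspect.
                stmt.execute("SELECT * FROM orders WHERE customer_id = 42");

                // The trace for the last traced statement is exposed here.
                try (ResultSet rs = stmt.executeQuery(
                        "SELECT QUERY, TRACE FROM INFORMATION_SCHEMA.OPTIMIZER_TRACE")) {
                    while (rs.next()) {
                        System.out.println(rs.getString("QUERY"));
                        System.out.println(rs.getString("TRACE"));
                    }
                }

                stmt.execute("SET optimizer_trace='enabled=off'");
            }
        }
    }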

    Read the article

  • The JavaOne 2012 Sunday Technical Keynote

    - by Janice J. Heiss
    At the JavaOne 2012 Sunday Technical Keynote, held at the Masonic Auditorium, Mark Reinhold, Chief Architect, Java Platform Group, stated that they were going to do things a bit differently--"rather than 20 minutes of SE, and 20 minutes of FX, and 20 minutes of EE, we're going to mix it up a little," he said. "For much of it, we're going to be showing a single application, to show off some of the great work that's been done in the last year, and how Java can scale well--from the cloud all the way down to some very small embedded devices, and how JavaFX scales right along with it."

Richard Bair and Jasper Potts from the JavaFX team demonstrated a JavaOne schedule builder application with impressive navigation, animation, pop-overs, and transitions. They noted that the application runs seamlessly on either Windows or Macs running Java 7. They then ran the same application on an Ubuntu Linux machine--"it just works," said Bair. The JavaFX duo next put the recently released JavaFX Scene Builder through its paces -- dragging and dropping various image assets to build the application's UI, then fine tuning a CSS file for the finished look and feel. Among many other new features, in the past six months JavaFX has released support for H.264 and HTTP live streaming, "so you can get all the real media playing inside your JavaFX application," said Bair. And in their developer preview builds of JavaFX 8, they've now split the rendering thread from the UI thread, to better take advantage of multi-core architectures.

Next, Brian Goetz, Java Language Architect, explored language and library features planned for Java SE 8, including Lambda expressions and better parallel libraries. These feature changes both simplify code and free up libraries to more effectively use parallelism. "It's currently still a lot of work to convert an application from serial to parallel," noted Goetz.

Reinhold had previously boasted of Java scaling down to "small embedded devices," so Bair and Potts next ran their schedule builder application on a small embedded PandaBoard system with an OMAP4 chip set. Connected to a touch screen, the embedded board ran the same JavaFX application previously seen on the desktop systems, but now running on Java SE Embedded. (The systems can be seen and tried at four of the nearby JavaOne hotels.) Bob Vandette, Java Embedded Architect, then displayed a $25 Raspberry Pi ARM-based system running Java SE Embedded, noting the even greater need for the platform independence of Java in such highly varied embedded processor spaces. Reinhold and Vandette discussed Project Jigsaw, the planned modularization of the Java SE platform, and its deferral from the Java 8 release to Java 9. Reinhold demonstrated the promise of Jigsaw by running a modularized demo version of the earlier schedule builder application on the resource-constrained Raspberry Pi system--although the demo gods were not smiling down, and the application ultimately crashed.

Reinhold urged developers to become involved in the Java 8 development process--getting the weekly builds, trying out their current code, and trying out the new features:
http://openjdk.java.net/projects/jdk8
http://openjdk.java.net/projects/jdk8/spec
http://jdk8.java.net

From there, Arun Gupta explored Java EE. The primary themes of Java EE 7, Gupta stated, will be greater productivity and HTML 5 functionality (WebSocket, JSON, and HTML 5 forms).
Part of the planned productivity increase of the release will come from a reduction in writing boilerplate code--through the widespread use of dependency injection in the platform, along with default data sources and default connection factories. Gupta noted the inclusion of JAX-RS in the web profile, the changes and improvements found in JMS 2.0, as well as enhancements to Java EE 7 in terms of JPA 2.1 and EJB 3.2. GlassFish 4 is the reference implementation of Java EE 7, and currently includes WebSocket, JSON, JAX-RS 2.0, JMS 2.0, and more. The final release is targeted for Q2, 2013. Looking forward to Java EE 8, Gupta explored how the platform will provide multi-tenancy for applications, modularity based on Jigsaw, and cloud architecture. Meanwhile, Project Avatar is the group's incubator project for designing an end-to-end framework for building HTML 5 applications. Santiago Pericas-Geertsen joined Gupta to demonstrate their "Angry Bids" auction/live-bid/chat application using many of the enhancements of Java EE 7, along with an Avatar HTML 5 infrastructure, and running on the GlassFish reference implementation.Finally, Gupta covered Project Easel, an advanced tooling capability in NetBeans for HTML5. John Ceccarelli, NetBeans Engineering Director, joined Gupta to demonstrate creating an HTML 5 project from within NetBeans--formatting the project for both desktop and smartphone implementations. Ceccarelli noted that NetBeans 7.3 beta will be released later this week, and will include support for creating such HTML 5 project types. Gupta directed conference attendees to: http://glassfish.org/javaone2012 for everything about Java EE and GlassFish at JavaOne 2012.
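To give a concrete flavor of the lambda and parallel-library changes Goetz described, here is a small, hypothetical Java SE 8 sketch contrasting today's external iteration with a lambda-based stream pipeline; the session list is a made-up example, not code shown in the keynote.

    import java.util.Arrays;
    import java.util.List;

    public class LambdaSketch {
        public static void main(String[] args) {
            List<String> sessions = Arrays.asList(
                    "JavaFX on Embedded Devices", "Lambdas in Java SE 8", "Java EE 7 and HTML5");

            // Pre-Java 8 style: external iteration with an explicit loop.
            for (String s : sessions) {
                if (s.contains("Java")) {
                    System.out.println(s.toUpperCase());
                }
            }

            // Java SE 8 style: a lambda-driven stream pipeline; using
            // parallelStream() instead of stream() lets the library decide
            // how to spread the work across multiple cores.
            sessions.parallelStream()
                    .filter(s -> s.contains("Java"))
                    .map(String::toUpperCase)
                    .forEach(System.out::println);
        }
    }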

    Read the article

  • Azure Mobile Services: available modules

    - by svdoever
    Azure Mobile Services has documented a set of objects available in your Azure Mobile Services server-side scripts at their documentation page, Mobile Services server script reference. Although the documented list is a nice list of objects for the common things you want to do, it will be sooner rather than later that you will look for more functionality to be included in your script, especially with the newly provided feature that you can now create your own custom APIs. If you use Git it is now possible to add any NPM module (node package manager module, say the NuGet of the node world), but why include a module if it is already available out of the box? And you can only use Git with Azure Mobile Services if you are an administrator on your Azure Mobile Service, not if you are a co-administrator (this will be solved in the future). Until now I did some trial and error experimentation to test whether a certain module was available. This is easiest to do as follows.

Create a custom API, for example named experiment, and in this API use the following code:

exports.get = function (request, response) {
    var module = "nonexistingmodule";
    var m = require(module);
    response.send(200, "Module '%s' found.", module);
};

You can now test your service with the following request in your browser: https://yourservice.azure-mobile.net/api/experiment

If you get the result {"code":500,"error":"Error: Internal Server Error"} you know that the module does not exist. In your logs you will find the following error:

Error in script '/api/experiment.json'. Error: Cannot find module 'nonexistingmodule' [external code] at C:\DWASFiles\Sites\yourservice\VirtualDirectory0\site\wwwroot\App_Data\config\scripts\api\experiment.js:3:13 [external code]

If you require an existing (undocumented) module like the OAuth module in the following code, you will get success as a result:

exports.get = function (request, response) {
    var module = "oauth";
    var m = require(module);
    response.send(200, "Module '" + module + "' found.");
};

If we look at the standard node.js documentation we see an extensive list of modules that can be used from your code. If we look at the list of files available in the Azure Mobile Services platform, as documented in the blog post Azure Mobile Services: what files does it consist of?, we see a folder node_modules with many more modules that are used to build the Azure Mobile Services functionality on, but that can also be utilized from your server-side node script code:

apn - An interface to the Apple Push Notification service for Node.js.
dpush - Send push notifications to Android devices using GCM.
mpns - A Node.js interface to the Microsoft Push Notification Service (MPNS) for Windows Phone.
wns - Send push notifications to Windows 8 devices using WNS.
pusher - Node library for the Pusher server API (see also: http://pusher.com/)
azure - Windows Azure Client Library for node.
express - Sinatra inspired web development framework.
oauth - Library for interacting with OAuth 1.0, 1.0A, 2 and Echo. Provides simplified client access and allows for construction of more complex apis and OAuth providers.
request - Simplified HTTP request client.
sax - An evented streaming XML parser in JavaScript.
sendgrid - A NodeJS implementation of the SendGrid Api.
sqlserver – In the node repository known as msnodesql - Microsoft Driver for Node.js for SQL Server.
tripwire - Break out from scripts blocking the node.js event loop.
underscore - JavaScript's functional programming helper library.
underscore.string - String manipulation extensions for the Underscore.js javascript library.
xml2js - Simple XML to JavaScript object converter.
xmlbuilder - An XML builder for node.js.

As stated before, many of these modules are used to provide the functionality of the Azure Mobile Services platform, and in general should not be used directly. On the other hand, I needed OAuth badly to authenticate to the new v1.1 services of Twitter, and was very happy that a require('oauth') and a few lines of code did the job. Based on the above modules and a lot of code in the other javascript files in the Azure Mobile Services platform, a set of global objects is provided that can be used from your server-side node.js script code. In future blog posts I will go into more detail with respect to how this code is built up, all starting at the node.js express entry point app.js.

    Read the article

  • Oracle GoldenGate 12c - Leading Enterprise Replication

    - by Doug Reid
    Oracle GoldenGate 12c was released on October 17th and includes several new cutting-edge features that firmly establish GoldenGate's leadership position in the data replication space. In fact, this release more than doubles the performance of data delivery, supports Oracle's new multitenant database feature, is more secure, has more options for high availability, and has made great strides to simplify the configuration and deployment of the product. Read through the press release if you haven't already, and do not miss the quote from CERN's Eva Dafonte Perez regarding Oracle GoldenGate 12c: "….performs five times faster compared to previous GoldenGate versions and simplifies the management of a multi-tier environment". There are a variety of new and improved features in Oracle GoldenGate 12c. Here are the highlights:

Optimized for Oracle Database 12c - GoldenGate 12c is custom tailored to the unique capabilities of Oracle Database 12c, and out of the box GoldenGate 12c supports multitenant (pluggable database (PDB)) and non-consolidated deployments of Oracle Database 12c. The naming convention used by Database 12c is now in three parts (PDB name, schema name, and object name). We have made changes to the GoldenGate capture process to support the new naming convention and streamlined the whole process so a single GoldenGate capture process is used at the container level rather than at each individual PDB. By having the capture process at the container level, resource usage and the number of processes are reduced. To view a conceptual architecture diagram click here.

Integrated Delivery for the Oracle Database - Leveraging a lightweight streaming API built exclusively for Oracle GoldenGate 12c, this process distributes load, auto-tunes the degree of parallelism, scales better, and delivers blinding rates of changed data to the Oracle database. One of the goals for Oracle GoldenGate 12c was to reduce IT costs by simplifying the configuration and reducing the time to manage complex infrastructures. In previous versions of Oracle GoldenGate, customers would split transaction loads by grouping tables into multiple different delivery processes (click here to view the previous method). Each delivery process executed independently and without any interaction or knowledge of other delivery processes. This setup was complicated to configure and time consuming, as the developer needed in-depth knowledge of the source and target schemas and the transaction profile. With GoldenGate 12c and Integrated Delivery we have made it easier to configure and faster to deploy. To view a conceptual architecture diagram of integrated delivery click here.

Coordinated Delivery for Non-Oracle Databases - Coordinated Delivery orchestrates high-speed apply processes and simplifies the configuration of GoldenGate for non-Oracle targets. In Oracle GoldenGate 12c a single delivery process is used with multiple threads (click here), and key events, such as primary key updates, event markers, DDL, etc., are coordinated between the various threads to ensure that the transactions are applied in the same sequence as they were captured, all while delivering improved performance.

Replication Between On-Premises and Cloud-Based Systems - The trend for businesses to utilize both on-premises and cloud-based systems is rising, and businesses need to replicate data back and forth. GoldenGate 12c can be configured in a variety of ways to provide real-time replication whether unrestricted or restricted (limited ports or HTTP tunneling) networks sit between on-premises and cloud-based systems.

Expanded Heterogeneity - It wouldn't be a GoldenGate release without new and improved platform support. Release 1 includes support for MySQL 5.6 and Sybase 15.7. In the next release of GoldenGate, support will be expanded for MS SQL Server, DB2, and Teradata.

Tighter Security - Oracle GoldenGate 12c is integrated with the Oracle Wallet to shield usernames and passwords using strong encryption and aliases. Customers accustomed to using the Oracle Wallet with other Oracle products will instantly be familiar with how to use this great new feature.

Expanded Oracle Application and Technology Support - GoldenGate can be used along with Oracle Coherence to enable real-time changed data feeds to the Coherence cache using TopLink and the Oracle GoldenGate JMS adapter. Plus, Oracle Advanced Customer Services (ACS) now offers low-downtime E-Business Suite platform and database migrations using GoldenGate as the enabling technology.

Stay tuned for more blogs on the new features and the upcoming launch webcast where we will go into these new features in more detail. In the meantime, make sure to read through our white paper "Oracle GoldenGate 12c Release 1 New Features Overview".

    Read the article

  • Windows in StreamInsight: Hopping vs. Snapshot

    - by Roman Schindlauer
    Three weeks ago, we explained the basic concept of windows in StreamInsight: defining sets of events that serve as arguments for set-based operations, like aggregations. Today, we want to discuss the so-called Hopping Windows and compare them with Snapshot Windows. We will compare these two because they can serve similar purposes with different behaviors; we will discuss the remaining window type, Count Windows, another time.

Hopping (and its syntactic-sugar sister Tumbling) windows are probably the most straightforward windowing concept in StreamInsight. A hopping window is defined by its length and the offset from one window to the next. They are aligned with some absolute point on the timeline (which can also be given as a parameter to the window) and create sets of events. The diagram below shows an example of a hopping window with a length of 1h and a hop size (the offset) of 15 minutes, hence creating overlapping windows.

Two aspects in this diagram are important: Since this window is overlapping, an event can fall into more than one window. If an (interval) event spans a window boundary, its lifetime will be clipped to the window before it is passed to the set-based operation. That's the default and currently only available window input policy. (This should only concern you if you are using a time-sensitive user-defined aggregate or operator.) The set-based operation will be applied to each of these sets, yielding a result. This result is: a single scalar value in the case of built-in or user-defined aggregates; a subset of the input payloads, in the case of the TopK operator; arbitrary events, when using a user-defined operator. The timestamps of the result are almost always the ones of the windows. Only the user-defined operator can create new events with timestamps. (However, even these event lifetimes are subject to the window's output policy, which is currently always to clip to the window end.)

Let's assume we were calculating the average over some payload field:

var result = from window in source.HoppingWindow(
                 TimeSpan.FromHours(1),
                 TimeSpan.FromMinutes(15),
                 HoppingWindowOutputPolicy.ClipToWindowEnd)
             select new { avg = window.Avg(e => e.Value) };

Now each window is reflected by one result event. As you can see, the window definition defines the output frequency. No matter how many or few events we got from the input, this hopping window will produce one result every 15 minutes – except for those windows that do not contain any events at all, because StreamInsight window operations are empty-preserving (more about that another time). The "forced" output for every window can become a performance issue if you have a real-time query with many events in a wide group & apply – let me explain: imagine you have a lot of events that you group by and then aggregate within each group – a classical streaming pattern. The hopping window produces a result in each group at exactly the same point in time for all groups, since the window boundaries are aligned with the timeline, not with the event timestamps. This means that the query output will become very bursty, delivering the results of all the groups at the same point in time. This becomes especially obvious if the events are long-lasting, spanning multiple windows each, so that the produced result events do not change their value very often. In such a case, a snapshot window can remedy this.

Snapshot windows are more difficult to explain than hopping windows: they represent those periods in time when no event changes occur. In other words, if you mark all event start and end times on your timeline, then you are looking at all snapshot window boundaries. If your events are never overlapping, the snapshot window will not make much sense. It is commonly used together with timestamp modification, which makes it a very powerful tool. Or as Allan Mitchell expressed in a recent tweet: "I used to look at SnapshotWindow() with disdain. Now she is my mistress, the one I turn to in times of trouble and need".

Let's look at a simple example: I want to compute the average of some value in my events over the last minute. I don't want this output to be produced at fixed intervals, but as soon as it changes (that's the true event-driven spirit!). The snapshot window will include all currently active events at each point in time, hence we need to extend our original events' lifetimes into the future. Applying the snapshot window to these events, it will appear to be "looking back into the past". If you look at the result produced in this diagram, you can easily prove that, at each point in time, the current event value represents the average of all original input events within the last minute. Here is the LINQ representation of that query, applying the lifetime extension before the snapshot window:

var result = from window in source
                 .AlterEventDuration(e => TimeSpan.FromMinutes(1))
                 .SnapshotWindow(SnapshotWindowOutputPolicy.Clip)
             select new { avg = window.Avg(e => e.Value) };

With more complex modifications of the event lifetimes you can achieve many more query patterns, for instance "running totals" by keeping the event start times but snapping their end times to some fixed time, like the end of the day. Each snapshot then "sees" all events that have happened in the respective time period so far.

Regards,
The StreamInsight Team

    Read the article

  • Microsoft BUILD 2013 Day 1&ndash;Keynote

    - by Tim Murphy
    Originally posted on: http://geekswithblogs.net/tmurphy/archive/2013/06/27/microsoft-build-2013-day-1ndashkeynote.aspx This one is going to be a little long because the keynote was jam-packed, so bear with me. The keynote for the first day of BUILD 2013 was kicked off by Steve Ballmer. He made it very clear that Microsoft's focus is on accelerating its time to market with products and product updates. His quote was that "Rapid release" is the new norm. He continued by showing off several new Lumias that have been buzzing around the internet for a while and announced that Sprint will now be carrying the HTC 8XT and Samsung ATIV. Ballmer is known for repeating words or phrases for effect. This time it was "Rapid release, rapid release" and "Touch, touch, touch, touch, touch, …". This was fun, but even more fun was when he announced that all attendees would receive an Acer Iconia 8" tablet. SCORE! The next subject Ballmer focused on was new apps. The three new ones were Flipboard, Facebook and NFL Fantasy Football. I liked the first two because these are ones that people coming from other platforms are missing. The NFL app is great just because it targets a demographic that can be fanatical. If these types of apps keep coming then the missing-app argument goes away. While many Negative Nancys are describing Windows 8.1 as Windows 180, Steve Ballmer chose to call it a "refined blend", as in a coffee that has been improved with a new mix. This includes more multi-tasking options and leveraging Bing throughout the entire ecosystem. He ended this first section by explaining that this will also bring more Bing development opportunities to the community. Steve Ballmer was followed by Julie Larson-Green who, from my point of view, spent her time on stage selling us on Windows 8 all over again. Something that I would not have thought was needed until I had listened to some other attendees who had a number of concerns and complaints. She showed a number of new gestures that will come with Windows 8.1, and while they were cool I was left wondering if they really improved the experience. I guess only time will tell. I did like the fact that the UI implementation to bring up "All Apps" now mirrors that of Windows Phone. The consistency is a big step forward that I hope to see continue. The cool factor went up from there as she swiped content from a desktop (mega-tablet) to the Xbox One. This seamless experience I believe is what is really needed for any future platform to be relevant. I was much more enthused by the presentation of Antoine Leblond who humbled us by letting us know that there are 5k new APIs. How that can be, or how anyone would ever use all of them, is another question. His announcement was that the Visual Studio 2013 preview would be available today along with the Windows 8.1 bits. One of the features of VS2013 that he demonstrated is the power consumption profiler. With battery life being a key factor with consumer consumption devices this is a welcome addition. He didn't limit his presentation to VS2013 features though. He showed how the Store has been redesigned to enable better search and discoverability of apps and how Windows 8.1 can automatically scale to multiple screen sizes depending on the resolution of the device. The last feature he demoed was the real-time video streaming API, which he made sure we understood by attaching a Surface to a little robot. Oh, but there was one more thing.
Antoine and Julie announced that all attendees would also be getting Surface Pros. BONUS! How much more could there be? Gurdeep Singh Pall was about to pile on. He introduced us to Bing as a platform (BaaP?). He said if they (Microsoft) could do something with an API that is good, 3rd party developers can do something that is dynamite, and showed us some of the tools they had produced. These included natural user interface improvements such as voice commands that looked to put Siri to shame. Add to that 3D, OCR and translation capabilities and the future looks to be full of opportunities. Ballmer then came out to show us one last thing. Project Spark is a game design environment that will be available for Windows 8.1, Xbox 360 and Xbox One. All I can say is that if my kids get their hands on this they are going to be able to learn some of what dad does in a much more enjoyable way. At the end of it all I was both exhausted and energized by what I saw. What could they have possibly left for the day 2 keynote? I hear it will feature Scott Hanselman. If that is right we are in for a treat. See you there. del.icio.us Tags: BUILD 2013,Windows 8.1,Windows Phone,XAML,Keynote,Bing,Visual Studio 2013,Project Spark

    Read the article

  • Data management in unexpected places

    - by Ashok_Ora
    When you think of network switches, routers, firewall appliances, etc., it may not be obvious that at the heart of these kinds of solutions is an engine that can manage huge amounts of data at very high throughput with low latencies and high availability. Consider a network router that is processing tens (or hundreds) of thousands of network packets per second. So what really happens inside a router? Packets are streaming in at the rate of tens of thousands per second. Each packet has multiple attributes, for example, a destination and associated SLAs. For each packet, the router has to determine the address of the next "hop" to the destination; it has to determine how to prioritize this packet. If it's a high priority packet, then it has to be sent on its way before lower priority packets. As a consequence of prioritizing high priority packets, lower priority data packets may need to be temporarily stored (held back), but addressed fairly. If there are security or privacy requirements associated with the data packet, those have to be enforced. You probably need to keep track of statistics related to the packets processed (someone's sure to ask). You have to do all this (and more) while preserving high availability, i.e. if one of the processors in the router goes down, you have to have a way to continue processing without interruption (the customer won't be happy with a "choppy" VoIP conversation, right?). And all this has to be achieved without ANY intervention from a human operator – the router is most likely to be in a remote location – it must JUST CONTINUE TO WORK CORRECTLY, even when bad things happen.

How is this implemented? As soon as a packet arrives, it is interpreted by the receiving software. The software decodes the packet headers in order to determine the destination, the kind of packet (e.g. voice vs. data), the SLAs associated with the "owner" of the packet, etc. It looks up the internal database of "rules" for how to process this packet and handles the packet accordingly. The software might choose to hold on to the packet safely for some period of time, if it's a low priority packet. Ah – this sounds very much like a database problem. For each packet, you have to minimally:

· Look up the most efficient next "hop" towards the destination. The "most efficient" next hop can change, depending on latency, availability etc.
· Look up the SLA and determine the priority of this packet (e.g. voice calls get priority over data ftp).
· Look up security information associated with this data packet. It may be necessary to retrieve the context for this network packet since a network packet is a small "slice" of a session. The context for the "header" packet needs to be stored in the router, in order to make this work.
· If the priority of the packet is low, then "store" the packet temporarily in the router until it is time to forward the packet to the next hop.
· Update various statistics about the packet.

In most cases, you have to do all this in the context of a single transaction. For example, you want to look up the forwarding address and perform the "send" in a single transaction so that the forwarding address doesn't change while you're sending the packet. So, how do you do all this? Berkeley DB is a proven, reliable, high performance, highly available embeddable database, designed for exactly these kinds of usage scenarios.
Berkeley DB is a robust, reliable, proven solution that is currently being used in these scenarios. First and foremost, Berkeley DB (or BDB for short) is very, very fast. It can process tens or hundreds of thousands of transactions per second. It can be used as a pure in-memory database, or as a disk-persistent database. BDB provides high availability – if one board in the router fails, the system can automatically fail over to another board – no manual intervention required. BDB is self-administering – there's no need for manual intervention in order to maintain a BDB application. No need to send a technician to a remote site in the middle of nowhere on a freezing winter day to perform maintenance operations. BDB has been used in over 200 million deployments worldwide over the past two decades for mission-critical applications such as the one described here. You have a choice of spending valuable resources to implement similar functionality, or you could simply embed BDB in your application and off you go! I know what I'd do – choose BDB, so I can focus on my business problem. What will you do?
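To make the embedded-database idea concrete, here is a minimal sketch of the kind of transactional lookup-and-update described above, written against the Berkeley DB Java Edition API (com.sleepycat.je); the environment path, key and value layouts are illustrative assumptions only, and a real router would use its own native data structures and the core Berkeley DB library rather than this toy example.

    import java.io.File;
    import java.nio.charset.StandardCharsets;
    import com.sleepycat.je.Database;
    import com.sleepycat.je.DatabaseConfig;
    import com.sleepycat.je.DatabaseEntry;
    import com.sleepycat.je.Environment;
    import com.sleepycat.je.EnvironmentConfig;
    import com.sleepycat.je.LockMode;
    import com.sleepycat.je.OperationStatus;
    import com.sleepycat.je.Transaction;

    public class RoutingTableDemo {
        public static void main(String[] args) throws Exception {
            // Open (or create) a transactional environment and database.
            EnvironmentConfig envConfig = new EnvironmentConfig();
            envConfig.setAllowCreate(true);
            envConfig.setTransactional(true);
            Environment env = new Environment(new File("/var/router/bdb"), envConfig);

            DatabaseConfig dbConfig = new DatabaseConfig();
            dbConfig.setAllowCreate(true);
            dbConfig.setTransactional(true);
            Database routes = env.openDatabase(null, "routes", dbConfig);

            // Look up the next hop for a destination and record a simple per-route
            // statistic inside a single transaction, as the scenario above requires.
            Transaction txn = env.beginTransaction(null, null);
            DatabaseEntry key = new DatabaseEntry("10.1.2.0/24".getBytes(StandardCharsets.UTF_8));
            DatabaseEntry value = new DatabaseEntry();
            OperationStatus status = routes.get(txn, key, value, LockMode.RMW);
            if (status == OperationStatus.SUCCESS) {
                String nextHop = new String(value.getData(), StandardCharsets.UTF_8);
                System.out.println("Forwarding via " + nextHop);
            }
            DatabaseEntry statsKey = new DatabaseEntry("stats:10.1.2.0/24".getBytes(StandardCharsets.UTF_8));
            DatabaseEntry statsVal = new DatabaseEntry("last-forwarded".getBytes(StandardCharsets.UTF_8));
            routes.put(txn, statsKey, statsVal);
            txn.commit();

            routes.close();
            env.close();
        }
    }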

    Read the article

  • Windows 8/Surface Launch Event Summary

    - by Tim Murphy
    Today was a big day for Microsoft with two separate launch events. The first was for Windows 8 and all of its hardware partners. The second was specifically to introduce the Microsoft Windows 8 Surface tablet. Below are some of the take-aways I got from the webcasts.

Windows 8 Launch

The three general areas that Microsoft focused on were the release of the OS itself, the public unveiling of the Windows Store and the new devices available from its hardware partners. The release of the OS focused on the fact that it will be available at midnight tonight for both new PCs and for upgrades. I can't say that this interested me that much since it was already known to most people. I think what they did show well was how easy the OS really is to use. The Windows Store is also not a new feature to those of us who have been running the pre-release versions of Windows 8 or have owned Windows Phone 7 for the past 2 years. What was interesting is that the Windows Store launches with more apps available than any other platform's store at their respective launch. I think this says a lot about how Microsoft focuses on the ability of developers to create software and make it available. They of course were sure to emphasize that the Windows Store has better monetary terms for developers than its competitors. They also showed off the fact that Xbox Music streaming is available to all Windows 8 users for free. Couple this with the Bing suite of apps that give you news, weather, sports and finance right out of the box and I think most people will find the environment a joy to use. I think the hardware demo, while quick and furious, really showed where Windows shines: CHOICE! They made a statement that over 1000 devices have been certified for Windows 8. They showed tablets, laptops, desktops, all-in-ones and convertibles. Since these devices have industry standard connectors they give a much wider variety of accessories and devices that you can use with them. Steve Ballmer then came on stage and tried to see how many times he could use the word "magical". He focused on how the Windows 8 OS is designed to integrate with SkyDrive, Skype and Outlook.com. He also emphasized that they think Windows 8 is the best choice for the enterprise when it comes to protecting data and integrating across devices, including Windows Phone 8. With that we were left to wait for the second event of the day.

Surface Launch

The second event of the day started with kids with magnets. Ok, they were adults, but who doesn't like playing with magnets. Steven Sinofsky detached and reattached the Surface keyboard repeatedly, clearly enjoying himself. It turns out that there are 4 magnets in the cover, 2 for alignment and 2 as connectors. They then went on to give us the details on the display. The 10.6" display is optically bonded to the case and is optimized to reduce glare. I think this came through very well in the demonstrations. The properties of the case were also a great selling point. The VaporMg allowed them to drop the device on stage, on purpose, and continue working. Of course they had to bring out the skateboards made from Surface devices. "It just has to feel right" was the reason they gave for many of their design decisions, from the weight and size of the device to the way the kickstand and camera work together. While this gave you the feeling that the whole process was trial and error, you could tell that a lot of science went into the specs. This included making sure that the magnets were strong enough to hold the cover on and still let a 3 year old remove the cover without effort. I am glad that they also decided that a USB port would be part of the spec, since it gives so many options. They made the point that this allows Surface to leverage over 420 million existing devices. That works for me. The last feature that I really thought was important was the microSD port. Being stuck with the onboard memory has been an aggravation of mine with many of the devices in the market today. I think they did a good job of really getting the audience to understand why you want this platform and this particular device. They used personal examples like creating a video of a birthday party and being in it, or the fact that the device was being used to live blog the event and control the lights and presentation. They showed very well that it was not only fun but very capable of getting real work done. Handing out tablets to the crowd didn't hurt either. In the end I really wanted a Surface even though I really have no need for one on a daily basis. Great job Microsoft! del.icio.us Tags: Windows 8,Win8,Windows 8 Launch

    Read the article

  • SPARC T4 ??????: SPARC T4 ??????????!!

    - by user13138700
    ?? 2011 ? 9 ?? SPARC T4 CPU ???????? SPARC T4 ????????????????2011??10?????????????????????????? ????????????????????SPARC T4 ?????????????????????????????????????????????????????????? SPARC T4 CPU ???? SPARC T4 ?????????????????????????????????? ??????????????????????4/4, 4/5, 4/6 ? 3???????? Oracle Open World 2012 ???????? Oracle Open World 2012 Tokyo ?? Oracle ?????&????? ??? Oracle Solaris ????????????·????????? SPARC&Solaris ??????????????SPARC&Solaris ????????????????????????????????????????????????????????????????????????? Oracle OpenWorld Tokyo 2012 ???? URL http://www.oracle.com/openworld/jp-ja/index.html ?????? 7264 ??????????????? ????Oracle Open World 2012 Tokyo ?????????????????????????SPARC T4 ????? ????????????????? SPARC T4 ????????? SPARC T3 ????????(S2??)??????????????????????????(S3??)??????????????????? ???????" T " ???????????????(?)?????? SPARC T1/T2/T3 ???????????????????????????(????????)????????????????????????? ?SPARC T4 ????????????????????????????? ?SPARC T4 ???????DB?????????????????????????????? ???????????????? ????????????????????????????????????????????? ???? SPARC T3 ???????????????????????????2???????????? ????????????????????????????????????????????????????? ?????????????? SPARC T4 ????????????????????????????????????SPARC T4 ????????? SPARC T4 ??????????????????????????????????????????? ?????????????? T4 ??????????????????? SPARC ???????????????????????????????????????????????????????????????????&??????????? ?????????????????????????????????????????????????????????Web?????????????DB?????????????????????????????????????? (????????????) ???????????? SPARC T4 ????????????????????????????? < T4 ???????? > ??? SPARC ??(S3??)??? x5??????????????????? x2????????????????????? Crypto (?????)?????????? ?????????????????????????/???????????????? ?????? 1, 2,& 4 ??????????? < T4 ????? ??????? > 8x SPARC S3 ?? (64????/???) 4MB ?? L3 ????? (8???/16???) 8x9 ????? 4x DDR3 ??????????? @6.4Gbps 6x ?????????? @9.6Gbps 2x8 PCIe 2.0 (5GTS) 2x10Gb XAUI ??????? < S3???????????? > ALU : Arithmetic Logic Unit BRU : Branch Logic Unit FGU : Flouting-point Graphics Unit IRF : Integer Register File FRF : Flouting-point Register File WRF : Working Register File MMU : Memory Management Unit LSU : Load Store Unit Crypto(SPU) : Streaming Processing Unit TRU : Trap Logic Unit < S3????????? > ????? 8????/?? ?????? Out-of-Order ?? 16???????????????? ????????????? ???????????? ??????? ????????? 64???? ITLB ? 128???? DTLB 64KB 4??? L1 ?????????????? 128KB 8??? ???? L2 ????? < T4 ???????? vs T3 ???????? > T4 ????????????? Out-Of-Order ???? Pick ???????? In-Order ?? Pick ?????? Commit ??????? Out-Of-Order ?? Commit ?????? In-Order ?? < T4 ?????????? > ???????????vs????????????????????????????? ????????Active??????????????????? ???????????????????????? ??????????????????? < T4vsT1/T2/T3 ??????? > SPARC T4 ???? T3????????Web??????????? DB?????????????????????????????? ????????????????????SPARC T4 ?????&Solaris ?????????????(????????)??????????????????????????????????????????????????????????!!? ????Oracle Open World 2012 Tokyo ????????????????SPARC T4 ?????????????????????? 4/4, 4/5, 4/6 ?3????????????????????????????????????????????????????????????????????????????????????? ????????????????? URL http://www.oracle.com/openworld/jp-ja/exhibit/index.html

    Read the article

  • SQL Server 2008: FileStream Insertion Failure w/ .NET 3.5SP1

    - by James Alexander
    I've configured a db w/ a FileStream group and have a table w/ File type on it. When attempting to insert a streamed file and after I create the table row, my query to read the filepath out and the buffer returns a null file path. I can't seem to figure out why though. Here is the table creation script: /****** Object: Table [dbo].[JobInstanceFile] Script Date: 03/22/2010 18:05:36 ******/ SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO SET ANSI_PADDING ON GO CREATE TABLE [dbo].[JobInstanceFile]( [JobInstanceFileId] [int] IDENTITY(1,1) NOT NULL, [JobInstanceId] [int] NOT NULL, [File] [varbinary](max) FILESTREAM NULL, [FileId] [uniqueidentifier] ROWGUIDCOL NOT NULL, [Created] [datetime] NOT NULL, CONSTRAINT [PK_JobInstanceFile] PRIMARY KEY CLUSTERED ( [JobInstanceFileId] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] FILESTREAM_ON [JobInstanceFilesGroup], UNIQUE NONCLUSTERED ( [FileId] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] FILESTREAM_ON [JobInstanceFilesGroup] GO SET ANSI_PADDING OFF GO ALTER TABLE [dbo].[JobInstanceFile] ADD DEFAULT (newid()) FOR [FileId] GO Here's my proc I call to create the row before streaming the file: /****** Object: StoredProcedure [dbo].[JobInstanceFileCreate] Script Date: 03/22/2010 18:06:23 ******/ SET ANSI_NULLS ON GO SET QUOTED_IDENTIFIER ON GO create proc [dbo].[JobInstanceFileCreate] @JobInstanceId int, @Created datetime as insert into JobInstanceFile (JobInstanceId, FileId, Created) values (@JobInstanceId, newid(), @Created) select scope_identity() GO And lastly, here's the code I'm using: public int CreateJobInstanceFile(int jobInstanceId, string filePath) { using (var connection = new SqlConnection(ConfigurationManager.ConnectionStrings["ConsumerMarketingStoreFiles"].ConnectionString)) using (var fileStream = new FileStream(filePath, FileMode.Open)) { connection.Open(); var tran = connection.BeginTransaction(IsolationLevel.ReadCommitted); try { //create the JobInstanceFile instance var command = new SqlCommand("JobInstanceFileCreate", connection) { Transaction = tran }; command.CommandType = CommandType.StoredProcedure; command.Parameters.AddWithValue("@JobInstanceId", jobInstanceId); command.Parameters.AddWithValue("@Created", DateTime.Now); int jobInstanceFileId = Convert.ToInt32(command.ExecuteScalar()); //read out the filestream transaction context to stream the file for storage command.CommandText = "select [File].PathName(), GET_FILESTREAM_TRANSACTION_CONTEXT() from JobInstanceFile where JobInstanceFileId = @JobInstanceFileId"; command.CommandType = CommandType.Text; command.Parameters.AddWithValue("@JobInstanceFileId", jobInstanceFileId); using (SqlDataReader dr = command.ExecuteReader()) { dr.Read(); //get the file path we're writing out to string writePath = dr.GetString(0); using (var writeStream = new SqlFileStream(writePath, (byte[])dr.GetValue(1), FileAccess.ReadWrite)) { //copy from one stream to another byte[] bytes = new byte[65536]; int numBytes; while ((numBytes = fileStream.Read(bytes, 0, 65536)) > 0) writeStream.Write(bytes, 0, numBytes); } } tran.Commit(); return jobInstanceFileId; } catch (Exception e) { tran.Rollback(); throw e; } } } Can someone please let me know what I'm doing wrong.
In the code, the following expression is returning null for the file path and shouldn't be: //get the file path we're writing out to string writePath = dr.GetString(0); The server is different from the computer the code is running on, but the necessary shares appear to be in order and I have also run the following: EXEC sp_configure filestream_access_level, 2 Any help would be greatly appreciated. Thanks!

    Read the article

  • Objective-c - How to serialize audio file into small packets that can be played?

    - by vfn
    Hi there, So, I would like to get a sound file and convert it into packets, and send it to another computer. I would like the other computer to be able to play the packets as they arrive. I am using AVAudioPlayer to try to play these packets, but I couldn't find a proper way to serialize the data on peer1 so that peer2 can play it. The scenario is: peer1 has an audio file, splits the audio file into many small packets, puts them in NSData objects and sends them to peer2. Peer2 receives the packets and plays them one by one, as they arrive. Does anyone know how to do this, or even if it is possible? EDIT: Here is some code to illustrate what I would like to achieve. // This code is part of the peer1, the one who sends the data - (void)sendData { int packetId = 0; NSString *soundFilePath = [[NSBundle mainBundle] pathForResource:@"myAudioFile" ofType:@"wav"]; NSData *soundData = [[NSData alloc] initWithContentsOfFile:soundFilePath]; NSMutableArray *arraySoundData = [[NSMutableArray alloc] init]; // Splitting the audio in 2 pieces // This is only an illustration // The idea is to split the data into multiple pieces // depending on the size of the file to be sent NSRange soundRange; soundRange.length = [soundData length]/2; soundRange.location = 0; [arraySoundData addObject:[soundData subdataWithRange:soundRange]]; soundRange.length = [soundData length]/2; soundRange.location = [soundData length]/2; [arraySoundData addObject:[soundData subdataWithRange:soundRange]]; for (int i=0; i // This is the code on peer2 that would receive and play the piece of audio in each packet - (void) receiveData:(NSData *)data { NSKeyedUnarchiver* unarchiver = [[NSKeyedUnarchiver alloc] initForReadingWithData:data]; if ([unarchiver containsValueForKey:PACKET_ID]) NSLog(@"DECODED PACKET_ID: %i", [unarchiver decodeIntForKey:PACKET_ID]); if ([unarchiver containsValueForKey:PACKET_SOUND_DATA]) { NSLog(@"DECODED sound"); NSData *sound = (NSData *)[unarchiver decodeObjectForKey:PACKET_SOUND_DATA]; if (sound == nil) { NSLog(@"sound is nil!"); } else { NSLog(@"sound is not nil!"); AVAudioPlayer *audioPlayer = [AVAudioPlayer alloc]; if ([audioPlayer initWithData:sound error:nil]) { [audioPlayer prepareToPlay]; [audioPlayer play]; } else { [audioPlayer release]; NSLog(@"Player couldn't load data"); } } } [unarchiver release]; } So, here is what I am trying to achieve...what I really need to know is how to create the packets so peer2 can play the audio. It would be a kind of streaming. Yes, for now I am not worried about the order in which the packets are received or played...I only need to get the sound sliced and then be able to play each piece, each slice, without needing to wait for the whole file to be received by peer2. Thanks!

    Read the article

< Previous Page | 57 58 59 60 61 62 63 64 65 66  | Next Page >