Search Results

Search found 16557 results on 663 pages for 'graph library'.

  • Installing Matlab on Ubuntu 12.04 32-bit

    - by Amir
    I have been trying to install Matlab 2012a, 2012b and 2013a for about four hours, and I have tried to fix the errors I ran into by following the posts 2012a, Ubuntu-Matlab Documentation and Matlab-central. For 2012a the installation GUI pops up with the error: The application encountered an unexpected error and needs to close. You may want to try re-installing your product(s). More information can be found at /tmp/mathworks_amir.log For 2012b and 2013a the error is: `Installing ... Exception in thread "main" com.google.inject.ProvisionException: Guice provision errors: 1) Error in custom provider, java.lang.RuntimeException: java.lang.reflect.InvocationTargetException at com.mathworks.wizard.WizardModule.provideDisplayProperties(WizardModule.java:60) while locating com.mathworks.instutil.DisplayProperties at com.mathworks.wizard.ui.components.ComponentsModule.providePaintStrategy(ComponentsModule.java:76) while locating com.mathworks.wizard.ui.components.PaintStrategy for parameter 4 at com.mathworks.wizard.ui.components.SwingComponentFactoryImpl.(SwingComponentFactoryImpl.java:110) while locating com.mathworks.wizard.ui.components.SwingComponentFactoryImpl while locating com.mathworks.wizard.ui.components.SwingComponentFactory for parameter 1 at com.mathworks.wizard.ui.WizardUIImpl.(WizardUIImpl.java:65) while locating com.mathworks.wizard.ui.WizardUIImpl while locating com.mathworks.wizard.ui.WizardUI annotated with @com.google.inject.name.Named(value=BaseWizardUI) at com.mathworks.wizard.ui.UIModule.provideWizardUI(UIModule.java:50) while locating com.mathworks.wizard.ui.WizardUI for parameter 0 at com.mathworks.wizard.ExceptionHandlerImpl.(ExceptionHandlerImpl.java:22) while locating com.mathworks.wizard.ExceptionHandlerImpl while locating com.mathworks.wizard.ExceptionHandler 1 error at com.google.inject.InjectorImpl$4.get(InjectorImpl.java:767) at com.google.inject.InjectorImpl.getInstance(InjectorImpl.java:793) at com.mathworks.wizard.WizardLauncher.startWizard(WizardLauncher.java:160) at com.mathworks.wizard.WizardLauncher.start(WizardLauncher.java:75) at com.mathworks.wizard.AbstractLauncher.launch(AbstractLauncher.java:27) at com.mathworks.wizard.AbstractLauncher.launchStandalone(AbstractLauncher.java:18) at com.mathworks.professionalinstaller.Launcher.main(Launcher.java:21) Caused by: java.lang.RuntimeException: java.lang.reflect.InvocationTargetException at com.google.inject.internal.ProviderMethod.get(ProviderMethod.java:106) at com.google.inject.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:48) at com.google.inject.InjectorImpl$4$1.call(InjectorImpl.java:758) at com.google.inject.InjectorImpl.callInContext(InjectorImpl.java:811) at com.google.inject.InjectorImpl$4.get(InjectorImpl.java:754) at com.google.inject.spi.ProviderLookup$1.get(ProviderLookup.java:89) at com.google.inject.spi.ProviderLookup$1.get(ProviderLookup.java:89) at com.google.inject.internal.ProviderMethod.get(ProviderMethod.java:95) at com.google.inject.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:48) at com.google.inject.SingleParameterInjector.inject(SingleParameterInjector.java:42) at com.google.inject.SingleParameterInjector.getAll(SingleParameterInjector.java:66) at com.google.inject.ConstructorInjector.construct(ConstructorInjector.java:84) at com.google.inject.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:111) at 
com.google.inject.FactoryProxy.get(FactoryProxy.java:56) at com.google.inject.SingleParameterInjector.inject(SingleParameterInjector.java:42) at com.google.inject.SingleParameterInjector.getAll(SingleParameterInjector.java:66) at com.google.inject.ConstructorInjector.construct(ConstructorInjector.java:84) at com.google.inject.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:111) at com.google.inject.FactoryProxy.get(FactoryProxy.java:56) at com.google.inject.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:45) at com.google.inject.InjectorImpl.callInContext(InjectorImpl.java:811) at com.google.inject.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:42) at com.google.inject.Scopes$1$1.get(Scopes.java:54) at com.google.inject.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:48) at com.google.inject.InjectorImpl$4$1.call(InjectorImpl.java:758) at com.google.inject.InjectorImpl.callInContext(InjectorImpl.java:811) at com.google.inject.InjectorImpl$4.get(InjectorImpl.java:754) at com.google.inject.spi.ProviderLookup$1.get(ProviderLookup.java:89) at com.google.inject.spi.ProviderLookup$1.get(ProviderLookup.java:89) at com.google.inject.internal.ProviderMethod.get(ProviderMethod.java:95) at com.google.inject.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:48) at com.google.inject.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:45) at com.google.inject.InjectorImpl.callInContext(InjectorImpl.java:811) at com.google.inject.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:42) at com.google.inject.Scopes$1$1.get(Scopes.java:54) at com.google.inject.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:48) at com.google.inject.SingleParameterInjector.inject(SingleParameterInjector.java:42) at com.google.inject.SingleParameterInjector.getAll(SingleParameterInjector.java:66) at com.google.inject.ConstructorInjector.construct(ConstructorInjector.java:84) at com.google.inject.ConstructorBindingImpl$Factory.get(ConstructorBindingImpl.java:111) at com.google.inject.FactoryProxy.get(FactoryProxy.java:56) at com.google.inject.ProviderToInternalFactoryAdapter$1.call(ProviderToInternalFactoryAdapter.java:45) at com.google.inject.InjectorImpl.callInContext(InjectorImpl.java:811) at com.google.inject.ProviderToInternalFactoryAdapter.get(ProviderToInternalFactoryAdapter.java:42) at com.google.inject.Scopes$1$1.get(Scopes.java:54) at com.google.inject.InternalFactoryToProviderAdapter.get(InternalFactoryToProviderAdapter.java:48) at com.google.inject.InjectorImpl$4$1.call(InjectorImpl.java:758) at com.google.inject.InjectorImpl.callInContext(InjectorImpl.java:804) at com.google.inject.InjectorImpl$4.get(InjectorImpl.java:754) ... 6 more Caused by: java.lang.reflect.InvocationTargetException at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at com.google.inject.internal.ProviderMethod.get(ProviderMethod.java:101) ... 
54 more Caused by: com.mathworks.instutil.JNIException: java.lang.UnsatisfiedLinkError: Can't load library: /tmp/mathworks_7417/bin/glnxa64/libinstutil.so at com.mathworks.instutil.NativeUtility.loadNativeLibrary(NativeUtility.java:39) at com.mathworks.instutil.NativeUtility.(NativeUtility.java:24) at com.mathworks.instutil.DisplayPropertiesImpl.(DisplayPropertiesImpl.java:10) at com.mathworks.wizard.WizardModule.provideDisplayProperties(WizardModule.java:67) ... 59 more Caused by: java.lang.UnsatisfiedLinkError: Can't load library: /tmp/mathworks_7417/bin/glnxa64/libinstutil.so at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1842) at java.lang.Runtime.load0(Runtime.java:795) at java.lang.System.load(System.java:1061) at com.mathworks.instutil.NativeUtility.loadNativeLibrary(NativeUtility.java:37) ... 62 more Finished ` I have tried to: 1) re-install the Java runtime, version 6 and then 7; 2) pass the Java path to the installer with -javadir; 3) force a 32-bit install with: sh install -glnx86 -v -javadir /usr/lib/jvm/java-7-openjdk-i386/jre. None of these have worked so far. Any ideas?

    Read the article

  • Migration from Exchange to BPOS - Microsoft Assessment and Planning (MAP) Toolkit Link

    - by Harish Pavithran
    The Microsoft Assessment and Planning (MAP) Toolkit is an agentless toolkit that finds computers on a network and performs a detailed inventory of the computers using Windows Management Instrumentation (WMI) and the Remote Registry Service. The data and analysis provided by this toolkit can significantly simplify the planning process for migrating to Windows® 7, Windows Vista®, Microsoft Office 2007, Windows Server® 2008 R2, Windows Server 2008, Hyper-V, Microsoft Application Virtualization, Microsoft SQL Server 2008, and Forefront® Client Security and Network Access Protection. Assessments for Windows Server 2008 R2, Windows Server 2008, Windows 7, and Windows Vista include device driver availability as well as recommendations for hardware upgrades. If you are interested in server virtualization planning, MAP provides the ability to gather performance metrics from computers you are considering for virtualization and a feature to model a library of potential host hardware and storage configurations. This information can be used to quickly perform "what-if" analysis using Hyper-V and Microsoft Virtual Server 2005 R2 as virtualization platforms. http://www.microsoft.com/downloads/details.aspx?displaylang=en&FamilyID=67240b76-3148-4e49-943d-4d9ea7f77730

    Read the article

  • Automatically Create Your Project’s NuGet Package Every Time It Builds Via NuGet

    - by deadlydog
    Originally posted on: http://geekswithblogs.net/deadlydog/archive/2013/06/22/automatically-create-your-projectrsquos-nuget-package-every-time-it-builds.aspx
    So you’ve got a super awesome library/assembly that you want to share with others, but you’re too lazy to actually use NuGet to package it up and upload it to the gallery; or maybe you don’t know how to create a NuGet package and don’t have the time or desire to learn. Well, my friends, now this can all be handled for you automatically. Read more at http://blog.danskingdom.com/automatically-create-your-projects-nuget-package-every-time-it-builds-via-nuget/

    Read the article

  • Forcing an External Activation with Service Broker

    - by Davide Mauri
    In these last days I’ve been working quite a lot with Service Broker, a technology I’m really happy to work with since it can give a lot of satisfaction. The scale-out solution one can easily build with it is simply astonishing. I’m helping a company build a very scalable – yet almost inexpensive – invoicing system that has to be able to scale out using commodity hardware. To offload work from the main server to satellite “compute nodes” (yes, I’ve borrowed this term from PDW) we’re using Service Broker and the External Activator application available in the SQL Server Feature Pack. For those who are not used to working with SSB, External Activation is a feature that allows you to intercept the arrival of a message in a queue right from your application code: http://msdn.microsoft.com/en-us/library/ms171617.aspx (look for “Event-Based Activation”). To make life even easier, Microsoft released the External Activator application, which saves you from writing even that code: http://blogs.msdn.com/b/sql_service_broker/archive/tags/external+activator/ The External Activator can be configured to execute your own application so that each time a message – an invoice in my case – arrives in the target queue, the configured application is executed and the invoice is calculated. A very nice feature of the External Activator is that it can automatically run as many instances of the configured application as needed to process as many messages as your system can handle. This helps a lot in building a scale-out solution, leaving the developer with only a fraction of the problems that usually come with asynchronous programming. Developers are also shielded from Service Broker itself, since everything can be encapsulated in stored procedures, so that – for them – developing such a scale-out asynchronous solution is not much more complex than executing a bunch of stored procedures. Now, if everything works correctly, you don’t have to worry about anything else. You put messages in the queue and your application, invoked by the External Activator, processes them. But what happens if, for some reason, your application fails to process the messages? For example, what if it crashes? The message is safe in the queue, so you just need to process it again. But your application is invoked by the External Activator, so now the question is: how do you wake that app up? Service Broker will engage the activation process only if certain conditions are met: http://msdn.microsoft.com/en-us/library/ms171601.aspx But how can we invoke the activation process manually, without having to wait for another message to arrive (the arrival of a new message being one of the conditions that can fire the activation process)?
    The “trick” is to do manually what the activation process does: send a system message to the queue in charge of handling External Activation messages:

      declare @conversationHandle uniqueidentifier;
      declare @n xml = N'
      <EVENT_INSTANCE>
        <EventType>QUEUE_ACTIVATION</EventType>
        <PostTime>' + CONVERT(CHAR(24),GETDATE(),126) + '</PostTime>
        <SPID>' + CAST(@@SPID AS VARCHAR(9)) + '</SPID>
        <ServerName>[your_server_name]</ServerName>
        <LoginName>[your_login_name]</LoginName>
        <UserName>[your_user_name]</UserName>
        <DatabaseName>[your_database_name]</DatabaseName>
        <SchemaName>[your_queue_schema_name]</SchemaName>
        <ObjectName>[your_queue_name]</ObjectName>
        <ObjectType>QUEUE</ObjectType>
      </EVENT_INSTANCE>'

      begin dialog conversation @conversationHandle
          from service [<your_initiator_service_name>]
          to service '<your_event_notification_service>'
          on contract [http://schemas.microsoft.com/SQL/Notifications/PostEventNotification]
          with encryption = off, lifetime = 6000;

      send on conversation @conversationHandle
          message type [http://schemas.microsoft.com/SQL/Notifications/EventNotification] (@n);

      end conversation @conversationHandle;

    That’s it! Put the code in a stored procedure and you can add to your application a button that says “Force Queue Processing” (or something similar) to start the activation process whenever you need it (which should not happen too frequently, but it may happen). PS: I know that the “fire-and-forget” technique (ending the conversation without waiting for an answer) is not a best practice, but in this case I don’t see how it can hurt, so I decided to stay very close to the KISS principle.
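    For completeness, the application the External Activator launches can stay very thin, since all the Service Broker work lives in stored procedures, as described above. Here is a minimal C# sketch of such a worker; the connection string and the dbo.ProcessInvoiceQueue procedure name are illustrative placeholders, not taken from the project described in this post.

      using System;
      using System.Data;
      using System.Data.SqlClient;

      class InvoiceWorker
      {
          static void Main()
          {
              // Placeholder connection string and procedure name for this sketch.
              var connectionString = "Server=.;Database=Invoicing;Integrated Security=SSPI";

              using (var connection = new SqlConnection(connectionString))
              using (var command = new SqlCommand("dbo.ProcessInvoiceQueue", connection))
              {
                  command.CommandType = CommandType.StoredProcedure;
                  command.CommandTimeout = 0; // let the procedure drain the queue

                  connection.Open();
                  command.ExecuteNonQuery(); // all RECEIVE logic stays inside the procedure
              }
          }
      }

    The External Activator simply starts one of these processes when the queue signals activation, so the executable itself needs no Service Broker knowledge at all.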

    Read the article

  • Comparing LINQ to SQL vs the classic SqlCommand

    When you are coming from SqlCommand and SqlConnection, it is difficult to move to another library for your database needs. For those still in limbo about moving to another DAL, here is a comparison to help you see the light or to move away forever. How to do a select query using SqlCommand: SqlConnection myConnection = new...
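    Since the inline snippet above is truncated, here is a minimal sketch of the kind of comparison the article sets up: the same SELECT written with a raw SqlCommand and with LINQ to SQL. The Product mapping, table name and connection string are assumptions made for this example, not taken from the original article.

      using System;
      using System.Data.Linq;
      using System.Data.Linq.Mapping;
      using System.Data.SqlClient;

      [Table(Name = "Products")]
      public class Product
      {
          [Column(IsPrimaryKey = true)] public int Id;
          [Column] public string Name;
          [Column] public decimal Price;
      }

      class Demo
      {
          const string ConnectionString = "Server=.;Database=Shop;Integrated Security=SSPI";

          // Classic ADO.NET: hand-written SQL, manual parameter and reader handling.
          static void SelectWithSqlCommand()
          {
              using (var connection = new SqlConnection(ConnectionString))
              using (var command = new SqlCommand(
                  "SELECT Id, Name, Price FROM Products WHERE Price > @minPrice", connection))
              {
                  command.Parameters.AddWithValue("@minPrice", 10m);
                  connection.Open();
                  using (var reader = command.ExecuteReader())
                  {
                      while (reader.Read())
                          Console.WriteLine("{0}: {1}", reader.GetInt32(0), reader.GetString(1));
                  }
              }
          }

          // LINQ to SQL: the query is composed in C# and translated to SQL for you.
          static void SelectWithLinqToSql()
          {
              using (var db = new DataContext(ConnectionString))
              {
                  var expensive = from p in db.GetTable<Product>()
                                  where p.Price > 10m
                                  select p;
                  foreach (var p in expensive)
                      Console.WriteLine("{0}: {1}", p.Id, p.Name);
              }
          }

          static void Main()
          {
              SelectWithSqlCommand();
              SelectWithLinqToSql();
          }
      }

    Even in a sketch this small the trade-off is visible: the LINQ version swaps hand-written SQL strings and reader indexing for a typed query the compiler can check.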

    Read the article

  • Integrating Twitter Into An ASP.NET Website Using OAuth

    Earlier this year I wrote an article about Twitterizer (http://www.twitterizer.net/), an open-source .NET library that can be used to integrate your application with Twitter (http://twitter.com/). Using Twitterizer you can allow your visitors to post tweets, view their timeline, and much more, all without leaving your website. The original article, Integrating Twitter Into An ASP.NET Website (http://www.4guysfromrolla.com/articles/021710-1.aspx), showed how to post tweets and view a timeline for a particular Twitter account using Twitterizer 1.0. To post a tweet to a specific account, Twitterizer 1.0 uses basic authentication. Basic authentication is a very simple

    Read the article

  • Azure – Part 6 – Blob Storage Service

    - by Shaun
    When you migrate your application onto Azure, one of the biggest concerns is external files. In the original model we knew which machine and folder our application (website or web service) was located in, so we could use MapPath or some other method to read and write external files such as images, text files or XML files. But things change when we deploy to Azure. Azure is not a server or a single machine; it is a set of virtual server machines running under the Azure OS, and, even worse, your application might be moved between these machines. So it is not possible to rely on reading or writing local external files on Azure. To resolve this issue Windows Azure provides another storage service for us: Blob. Unlike the table service, the blob service is used to store text and binary data rather than structured data. It provides two types of blobs: Block Blobs and Page Blobs. Block Blobs are optimized for streaming. They are comprised of blocks, each of which is identified by a block ID, and each block can be a maximum of 4 MB in size. Page Blobs are optimized for random read/write operations and provide the ability to write to a range of bytes in a blob. They are a collection of pages, and the maximum size of a page blob is 1 TB.

    In the managed library the Azure SDK lets us communicate with blobs through the CloudBlobClient, CloudBlobContainer, CloudBlockBlob and CloudPageBlob classes. As with the table service managed library, CloudBlobClient lets us reach the blob service by passing in our storage account information and is also responsible for creating the blob container if it does not exist. From the CloudBlobContainer we can then save or load block blobs and page blobs through the CloudBlockBlob and CloudPageBlob classes.

    Let’s improve the example from the previous posts and add a service method that allows the user to upload a logo image. On the server side I created a method named UploadLogo with two parameters: email and image. Then I created the storage account from the config file and added validation to ensure that the email passed in is valid.

      var storageAccount = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
      var accountContext = new DynamicDataContext<Account>(storageAccount);

      // validation
      var accountNumber = accountContext.Load()
          .Where(a => a.Email == email)
          .ToList()
          .Count;
      if (accountNumber <= 0)
      {
          throw new ApplicationException(string.Format("Cannot find the account with the email {0}.", email));
      }

    Then there are three steps for saving the image into the blob service. First, as with the table service, I create the container with a unique name, creating it if it does not exist.

      // create the blob container for account logos if not exist
      CloudBlobClient blobStorage = storageAccount.CreateCloudBlobClient();
      CloudBlobContainer container = blobStorage.GetContainerReference("account-logo");
      container.CreateIfNotExist();

    Then, since in this example I will just send the blob access URL back to the client, I need to open read permission on that container.
      // configure blob container for public access
      BlobContainerPermissions permissions = container.GetPermissions();
      permissions.PublicAccess = BlobContainerPublicAccessType.Container;
      container.SetPermissions(permissions);

    And at the end I combine the blob resource name from the input file name and a Guid, save it to a block blob using the UploadByteArray method, and return the URL of the blob to the client side.

      // save the blob into the blob service
      string uniqueBlobName = string.Format("{0}_{1}.jpg", email, Guid.NewGuid().ToString());
      CloudBlockBlob blob = container.GetBlockBlobReference(uniqueBlobName);
      blob.UploadByteArray(image);

      return blob.Uri.ToString();

    Let’s update the client side application a bit and see the result. Here I just use my simple console application to let the user enter the email and the file name of the image. If everything is OK it shows the URL of the blob on the server side, so we can open it in a web browser and see the logo I have just uploaded. You may notice that the blob URL is based on the container name and the blob’s unique name. The Azure SDK documentation has a page on the rules for naming them, but I think the simple rule is that they must be valid as a URL address. You cannot name the container with a dot or a slash, as that will break the ADO.NET Data Services routing rule; for example, if you name the blob container Account.Logo it will throw an exception saying 400 Bad Request.

    Summary
    In this short entry I covered the simple usage of the blob service to save images on Azure. Since the Azure platform does not support the local file system, we have to migrate our file read/write code to the blob service before deploying to Azure. To reduce this effort Microsoft provides a new approach named Drive, which allows us to read and write NTFS files just like we did before. It is built on top of the blob service but is better suited to file access. I will discuss it more in the next post.

    Hope this helps, Shaun. All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • How do you organize your projects?

    - by Sergio Tapia
    Do you have any particular style of organizing projects? For example, currently I'm creating a project for a couple of schools here in Bolivia, and this is how I organized it: TutoMentor (Solution), TutoMentor.UI (WinForms project), TutoMentor.Data (Class library project). How exactly do you organize your projects? Do you have an example of something you organized and are proud of? Can you share a screenshot of the Solution pane? In the UI area of my application, I'm having trouble deciding on a good scheme for organizing the different forms and where they belong. Edit: What about organizing the different forms in the .UI project? Where and how should I group the different forms? Putting them all at the root level of the project is a bad idea.

    Read the article

  • An Hour With Bill Buxton MIX10

    After spending a couple of hours with Rowan Simpson yesterday afternoon, I found myself continually coming back to some of the things that Bill Buxton talked about in his hour-long Q&A at MIX10 in Las Vegas. Don't have Silverlight? Download the video in WMV, WMV (High) or MP4 format. At the more theoretical level, Bill discusses technology as a human prosthesis, but he favours metaphors that are as far away from technology as possible. The Seattle Public Library and software building....

    Read the article

  • Berkeley DB 11gR2 has been released

    - by Lajos Sárecz
    The latest version of Oracle Berkeley DB, 11gR2, was released on Tuesday. Berkeley DB is the market-leading open-source embeddable database engine. Because Berkeley DB is available as a library, it can be linked directly into the application, which is what gives it its extremely high performance and zero-administration footprint. What's new in this version: SQLite support; JDBC and ODBC connectivity; support for the Android platform. I recently wrote about the new version of Oracle Lite as well, which also supports SQLite. The similarity is no coincidence: the developers deliberately set out to make the Oracle Database Lite Mobile Server easier to synchronize with Oracle Berkeley DB mobile applications. The new version will be available for download from 31 March 2010.

    Read the article

  • "Unable to open MRTG log file" error with nagios and mrtg

    - by Simone Magnaschi
    We have a strange issue with our setup of Icinga / Nagios and MRTG. Icinga is working great and has no problems; it can monitor basically everything without issues. We set up MRTG to gather bandwidth data from our routers and switches. MRTG is working fine: it stores the log data in the /var/www/mrtg/ directory and displays the graph data via web, so we assume MRTG is doing its job. We tried to set up bandwidth checks in Nagios:

      define service{
          use generic-service ; Inherit values from a template
          host_name zywall-agora
          service_description ZYWALL AGORA TRAFFICO
          check_command check_local_mrtgtraf!/var/www/mrtg/x.x.x.x_2.log!AVG!1000000,2000000!5000000,5000000!1000
          check_interval 1 ; Check the service every 1 minute under normal conditions
          retry_interval 1 ; Re-check every minute until its final/hard state is determined
      }

    where /var/www/mrtg/x.x.x.x_2.log is the correct log file path. We keep getting an "Unable to open MRTG log file" error in the test result in the Icinga web interface. We tried everything: giving ownership of the log file to user nagios or icinga, giving the file chmod 777, and copying the file to another directory with full permissions. Same error. The strange thing is that if we run the command Nagios generates in a bash session, it works like a charm:

      /usr/lib64/nagios/plugins/check_mrtgtraf -F /var/www/mrtg/x.x.x.x_2.log -a AVG -w 10,20 -c 5000000,5000000 -e 10

    Result: Traffic WARNING - Avg. In = 17.9 KB/s, Avg. Out = 5.0 KB/s|in=17.877930KB/s;10.000000;5000000.000000;0.000000 out=5.000000KB/s;20.000000;5000000.000000;0.000000

    We ran that command line as root, as user nagios and as user icinga, and all three worked OK. We thought the command Nagios builds might have something wrong in it, so we debugged Nagios, but we found that the generated command is the same as above. Searching Google for this kind of problem returns only issues on systems where MRTG is not installed, or issues with a wrong path to the log file, but neither seems to be our case. We are stuck; can somebody help?

    Read the article

  • How do you uninstall wine 1.5?

    - by jeff
    I installed Wine through the terminal using these commands:

      sudo add-apt-repository ppa:ubuntu-wine/ppa
      sudo apt-get update
      sudo apt-get install wine1.5

    I know that I've removed at least part of wine. I removed the .wine folder in my home folder and I managed to remove the repository via this command:

      sudo apt-add-repository --remove ppa:ubuntu-wine/ppa

    Now when I try to install wine 1.4 via Ubuntu software center, it tells me I must remove these items to install wine:

      Microsoft windows compatibility layer (binary emulator and library) wine1.5
      Microsoft windows compatibility layer (64-bit support) wine1.5-amd64
      Microsoft windows compatibility layer (32-bit support) wine1.5-i386:i386

    I've already tried this command:

      sudo apt-get --purge remove wine

    but it said that wine wasn't installed. Some assistance would be appreciated.

    Read the article

  • Is the “jQuery programming style” a kind of Reactive programming?

    - by Peter Krauss
    jQuery is a JavaScript library and framework, but when we program with jQuery against DOM problems and solutions, we end up practicing a style of programming that is quite different... We can read about jQuery at Wikipedia: The set of jQuery core features — DOM element selections, traversal and manipulation —, enabled by its selector engine (...), created a new "programming style", fusing algorithms and DOM-data-structures. This question is similar to "subquestion-3" of this question, but not so generic. The focus here is on this new kind of "programming style"... So, the question: Is the "jQuery programming style in a DOM context" a new paradigm? Or is it just one more example of reactive programming (not "cell-oriented" but "DOM-node oriented"), or something else? We have no "standard taxonomy of paradigms", so please, in your answer, also indicate your "best choice for a Wikipedia paradigm". Example: if you understand "jQuery programming of the DOM" as being like "awk filtering data", your choice can be event-driven.

    Read the article

  • How to read/write cookies in ASP.NET

    - by SAMIR BHOGAYTA
    Writing cookies:

      Response.Cookies["userName"].Value = "patrick";
      Response.Cookies["userName"].Expires = DateTime.Now.AddDays(1);

      HttpCookie aCookie = new HttpCookie("lastVisit");
      aCookie.Value = DateTime.Now.ToString();
      aCookie.Expires = DateTime.Now.AddDays(1);
      Response.Cookies.Add(aCookie);

    Reading cookies:

      if (Request.Cookies["userName"] != null)
          Label1.Text = Server.HtmlEncode(Request.Cookies["userName"].Value);

      if (Request.Cookies["userName"] != null)
      {
          HttpCookie aCookie = Request.Cookies["userName"];
          Label1.Text = Server.HtmlEncode(aCookie.Value);
      }

    The link below gives full, detailed information about cookies: http://msdn.microsoft.com/en-us/library/ms178194.aspx
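    One more operation worth keeping next to these two is removing a cookie. There is no delete call; you overwrite the cookie with an expiry date in the past so the browser discards it. A small sketch in the same style as the snippets above:

      // Deleting a cookie: send it back already expired
      if (Request.Cookies["userName"] != null)
      {
          HttpCookie expiredCookie = new HttpCookie("userName");
          expiredCookie.Expires = DateTime.Now.AddDays(-1);
          Response.Cookies.Add(expiredCookie);
      }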

    Read the article

  • Windows Phone 7 development: first impressions

    - by DigiMortal
    After a hard week at work I got some free time to play with the Windows Phone 7 CTP developer tools. Although my first test application is still unfinished, I think this is a good moment to share my first experiences with you. In this posting I will give you a quick overview of the Windows Phone 7 developer tools from a developer's perspective. If you are familiar with Visual Studio 2010 then you will feel comfortable, because the Windows Phone 7 CTP developer tools are based on Visual Studio 2010 Express.

    Project templates
    There are five project templates available. Three of them are based on Silverlight and two on XNA Game Studio:
    Windows Phone Application (Silverlight)
    Windows Phone List Application (Silverlight)
    Windows Phone Class Library (Silverlight)
    Windows Phone Game (XNA Game Studio)
    Windows Phone Game Library (XNA Game Studio)
    Currently I am writing two test applications. One of them is based on the Windows Phone Application project template and the other on the Windows Phone List Application template. After creating these projects you see the corresponding views in Visual Studio (screenshots: Windows Phone Application, Windows Phone List Application). I suggest you use one of these templates to get started more easily.

    Windows Phone 7 emulator
    You can run your Windows Phone 7 applications on the Windows Phone 7 emulator that comes with the developer tools CTP. If you run your application, the emulator is started automatically and you can try out how your application works in a phone-like environment. The screenshot on the right shows the emulator running the Windows Phone List Application as it is created by default. The emulator is a little bit slow and uncomfortable, but it works pretty well. So far I have caused only a couple of crashes during my experiments. In those cases the emulator kept working but Visual Studio got stuck because it could not communicate with the emulator. One important note: the emulator is based on a virtual machine, although you can see only the phone screen and the options toolbar. If you want to run the emulator you must close all virtual machines running on your machine and run Visual Studio 2010 as administrator. Once you run the emulator you can keep it open, because you can stop your application in Visual Studio, modify, compile and re-deploy it without restarting the emulator.

    Designing user interfaces
    You can design the user interface of your application in Visual Studio. When you open a XAML file it is displayed in a window with two panels. The left panel shows the device screen and works as the visual design surface, while the right panel shows the XAML markup and lets you modify the XML if you need to. As this is one of my very first Silverlight applications, I felt more comfortable with the XAML editor because the property names in the property boxes of the visual designer confused me a little. The designer panel is not very good because it is visually hard to follow. It has a black background that makes the dark borders of controls very hard to see. If you have a monitor with very high contrast then it may not be a real problem; I have a usual monitor and I have a problem. :) Putting controls on the design surface, dragging and resizing them is also pretty painful. Some controls are drawn correctly, but for some controls you have to set the width and height in the XML so they can be resized. After some practicing it is not so annoying anymore. On the right you can see the toolbox with some controls. This is all you get out of the box, but it is sufficient to get started.
    After getting some experience you can create your own controls or use existing ones from other vendors or developers. If it is your first time doing stuff with Silverlight, keep Google open – you will need it a lot. After getting over the first shock you get the point very quickly and start developing at normal speed. :)

    Writing source code
    Writing source code is the most familiar part of the experience: the good old Visual Studio code editor with all the nice features it has. But here you also get some surprises:
    The anatomy of Silverlight controls is a little bit different from that of user controls in web and forms projects.
    Windows Phone 7 doesn't run the full version of Windows (I bet it is some version of Windows CE or something like it), so there are fewer system classes you can use.
    Some familiar classes have fewer methods than in the full version of the .NET Framework, and in these cases you have to write the code yourself or find libraries or source code elsewhere.
    These problems are really not so much problems as limitations, and you get over them easily.

    Conclusion
    The Windows Phone 7 CTP developer tools help you do a lot of things on Windows Phone 7. Although I expected better performance from the tools, I think the current performance is not a problem. So far my first test project is going very well and Google has an answer for almost every question. Windows Phone 7 is a mobile device and therefore has fewer hardware resources than desktop computers. This is why the toolset is so limited. The more memory you need, the slower the device gets, and, as you may guess, the more battery it needs. If you are writing apps for mobile devices, do your best to make your application use as few resources as possible and run as fast as possible.

    Read the article

  • How to build Open JavaFX for Android.

    - by PictureCo
    Here's a short recipe for baking JavaFX for Android Dalvik. We will need just a few ingredients, but each one requires special care. So let's get down to business.

    Sources
    The first ingredient is an open JavaFX repository. This should be a piece of cake. As always, there's a catch. You probably know that Dalvik is JDK 6 compatible and also that certain APIs are missing compared to the good old Java VM from Oracle. Fortunately there is a repository which is a backport of regular OpenJFX to JDK 7, and going from JDK 7 to JDK 6 is possible. The first thing to do is to clone or download the repository from https://bitbucket.org/narya/jfx78. The main page of the project says "It works in some cases", so we will presume that it will work in most cases. As I've said, the Dalvik VM misses some APIs which would lead to build failures. To get them, use another compatibility repository which is available on GitHub at https://github.com/robovm/robovm-jfx78-compat. Download the zip and unzip the sources into jfx78/modules/base. We also need the JavaFX binary stubs: use jfxrt.jar from JDK 8. The last thing to download is the freetype sources from http://freetype.org. These will be necessary for native font rendering.

    Toolchain setup
    I have to point out that these instructions were tested only on Linux. I suppose they will work with minimal changes also on Mac OS. I also presume that you were able to build open JavaFX, which means all tools like ant, gradle, gcc and JDK 8 have been installed and are working all right. In addition to this you will need to download and install JDK 7, the Android SDK and the Android NDK for native code compilation. Installing all of them will take some time. Don't forget to put them in your path.

      export ANDROID_SDK=/opt/android-sdk-linux
      export ANDROID_NDK=/opt/android-ndk-r9b
      export JAVA_HOME=/opt/jdk1.7.0
      export PATH=$JAVA_HOME/bin:$ANDROID_SDK/tools:$ANDROID_SDK/platform-tools:$ANDROID_NDK

    Freetype
    Unzip the freetype release sources first. We will have to cross-compile them for ARM. First we will create a standalone toolchain for cross-compiling, installed in ~/work/ndk-standalone-19.

      $ANDROID_NDK/build/tools/make-standalone-toolchain.sh --platform=android-19 --install-dir=~/work/ndk-standalone-19

    After the standalone toolchain has been created, cross-compile freetype with the following script:

      export TOOLCHAIN=~/work/freetype/ndk-standalone-19
      export PATH=$TOOLCHAIN/bin:$PATH
      export FREETYPE=`pwd`
      ./configure --host=arm-linux-androideabi --prefix=$FREETYPE/install --without-png --without-zlib --enable-shared
      sed -i 's/\-version\-info \$(version_info)/-avoid-version/' builds/unix/unix-cc.mk
      make
      make install

    It will compile and install the freetype library into $FREETYPE/install. We will link to this install dir later on. It would also be possible to link OpenJFX font support dynamically against the skia library available on Android, which already contains freetype. That creates a smaller result but can have compatibility problems.

    Patching
    Download the patches javafx-android-compat.patch and android-tools.patch and patch the jfx78 repository. I recommend having a look at the patches. The first one, javafx-android-compat.patch, updates the OpenJFX build script, removes the dependency on SharedSecret classes and updates LensLogger to remove a dependency on the JDK-specific PlatformLogger. The second one, android-tools.patch, creates a helper script in android-tools. The script helps to set up JavaFX Android projects.

    Building
    Now it is time to try the build.
    Run the following script:

      JAVA_HOME=/opt/jdk1.7.0 JDK_HOME=/opt/jdk1.7.0 ANDROID_SDK=/opt/android-sdk-linux ANDROID_NDK=/opt/android-ndk-r9b \
      PATH=$JAVA_HOME/bin:$ANDROID_SDK/tools:$ANDROID_SDK/platform-tools:$ANDROID_NDK:$PATH \
      gradle -PDEBUG -PDALVIK_VM=true -PBINARY_STUB=~/work/binary_stub/linux/rt/lib/ext/jfxrt.jar \
          -PFREETYPE_DIR=~/work/freetype/install -PCOMPILE_TARGETS=android

    If everything went all right, the output is in build/android-sdk.

    Create your first JavaFX Android project
    Use the gradle script in android-tools. The script sets up the project structure for you. The following command creates an Android HelloWorld project which links to the freshly built JavaFX runtime and to a HelloWorld application.
    NAME is the name of the Android project.
    DIR is where to create the project.
    PACKAGE is the package name required by Android. It has nothing to do with the packaging of the JavaFX application.
    JFX_SDK points to our recently built runtime.
    JFX_APP points to the dist directory of the JavaFX application (where all the application jars sit).
    JFX_MAIN is the fully qualified name of the main class.

      gradle -PDEBUG -PDIR=/home/user/work -PNAME=HelloWorld -PPACKAGE=com.helloworld \
          -PJFX_SDK=/home/user/work/jfx78/build/android-sdk -PJFX_APP=/home/user/NetBeansProjects/HelloWorld/dist \
          -PJFX_MAIN=com.helloworld.HelloWorld createProject

    Now cd to the created project and use it like any other Android project: ant clean, debug, uninstall and installd will work. I haven't tried it from any IDE, neither Eclipse nor NetBeans. Special thanks to Stefan Fuchs and Daniel Zwolenski for the repositories used in this blog post.

    Read the article

  • Can compressing Program Files save space *and* give a significant boost to SSD performance?

    - by Christopher Galpin
    Considering solid-state disk space is still an expensive resource, compressing large folders has appeal. Thanks to VirtualStore, could Program Files be a case where it might even improve performance?

    Discovery
    In particular I have been reading: SSD and NTFS Compression Speed Increase? Does NTFS compression slow SSD/flash performance? Will somebody benchmark whole disk compression (HD, SSD) please? (may have to scroll up) The first link is particularly dreamy, but maybe heads a little too far into the clouds. The third link has this sexy semi-log graph (logarithmic scale!). Quote (with notes): Using highly compressable data (IOmeter), you get at most a 30x performance increase [for reads], and at least a 49x performance DECREASE [for writes]. Assuming I interpreted and clarified that sentence correctly, this single user's benchmark has me incredibly interested. Although write performance tanks wretchedly, read performance still soars. It gave me an idea.

    Idea: VirtualStore
    It so happens that, thanks to sanity-saving security features introduced in Windows Vista, write access to certain folders such as Program Files is virtualized for non-administrator processes. Which means, in normal (non-elevated) usage, a program or game's attempt to write data to its install location in Program Files (which is perhaps a poor location) is redirected to %UserProfile%\AppData\Local\VirtualStore, somewhere entirely different. Thus, to my understanding, writes to Program Files should primarily only occur when installing an application. This makes compressing it not only a huge source of space gain, but also a potential candidate for performance gain.

    Testing
    The beginning of this post has me a bit timid: it suggests benchmarking NTFS compression on a whole drive is difficult because turning it off "doesn't decompress the objects". However, it seems to me the compact command is perfectly capable of doing so for both drives and individual folders. Could it be only marking them for decompression the next time the OS reads from them? I need to find the answer before I begin my own testing.

    Read the article

  • Leveraging Microsoft Patterns and Practices

    - by Tim Murphy
    I want to bring the Patterns and Practices group to the attention of those who have not already been exposed to it. I have been a fan of the P&P team since they came out with the original Application Blocks, which eventually turned into the Enterprise Library. Their main purpose is to assemble guidance and tools that make it easier for all of us to build amazing solutions. I would simply suggest you spend some time exploring the information and code libraries that they have produced. Free resources are always a great find and I have used a number of the P&P solutions over the years with success. If nothing else you may find some new ideas. Enjoy. http://msdn.microsoft.com/en-us/practices/

    Read the article

  • Edit Media Center TV Recordings with Windows Live Movie Maker

    - by DigitalGeekery
    Have you ever wanted to take a TV program you’ve recorded in Media Center and remove the commercials or save clips of favorite scenes? Today we’ll take a look at editing WTV and DVR-MS files with Windows Live Movie Maker. Download and install Windows Live Movie Maker; the download link can be found at the end of the article. WLMM is part of Windows Live Essentials, but you can choose to install only the applications you want. You’ll also want to be sure to uncheck any unwanted settings, like setting Bing as the default search provider or MSN as your browser home page. Add your recorded TV file to WLMM by clicking the Add videos and photos button, or by dragging and dropping it onto the storyboard. You’ll see your video displayed in the Preview window on the left and on the storyboard. Adjust the Zoom Time Scale slider at the lower right to change the level of detail displayed on the storyboard. You may want to start zoomed out and zoom in for more detailed edits.

    Removing Commercials or Unwanted Sections
    Note: Changes and edits made in Windows Live Movie Maker do not change or affect the original video file. To accomplish this, we will make cuts, or “splits,” at the beginning and end of the section we want to remove, and then we will delete that section from our project. Click and drag the slider bar along the storyboard to scroll through the video. When you get to the end of a row on the storyboard, drag the slider down to the beginning of the next row. We’ve found it easiest and most accurate to get close to the end of the commercial break and then use the Play button and the Previous Frame and Next Frame buttons underneath the Preview window to fine tune your cut point. When you find the right place to make your first cut, click the Split button on the Edit tab on the ribbon. You will see your video “split” into two sections. Now, repeat the process of scrolling through the storyboard to find the end of the section you wish to cut. When you are at the proper point, click the Split button again. Now we’ll delete that section by selecting it and pressing the Delete key, selecting Remove on the Home tab, or by right-clicking on the section and selecting Remove.

    Trim Tool
    This tool allows you to select a portion of the video to keep while trimming away the rest. Click and drag the sliders in the preview window to select the area you want to keep. The area outside the sliders will be trimmed away; the area inside is the section that is kept in the movie. You can also adjust the Start and End points manually on the ribbon. Delete any additional clips you don’t want in the final output. You can also accomplish this by using the Set start point and Set end point buttons. Clicking Set start point will eliminate everything before the start point. Set end point will eliminate everything after the end point. And you’re left with only the clip you want to keep.

    Output your Video
    Select the icon at the top left, then select Save movie. All of these settings will output your movie as a WMV file, but file size and quality will vary by setting. The Burn to DVD option also outputs a WMV file, but then opens Windows DVD Maker and prompts you to create and burn a DVD.

    Conclusion
    WLMM is one of the few applications that can edit WTV files, and it’s the only one we’re aware of that’s free.
    We should note that only WTV and DVR-MS files will appear in the Recorded TV library in Media Center, so if you want to view your WMV output file in WMC you’ll need to add it to the Video or Movie library. Would you like to learn more about Windows Live Movie Maker? Check out our article on how to turn photos and home videos into movies with Windows Live Movie Maker. Need to add videos from a network location? WLMM doesn’t allow this by default, but you can check out how to add network support to Windows Live Movie Maker. Download Windows Live

    Read the article

  • DNASTREAM’s RapidLaunch Oracle Accelerate solution for RightNow

    - by Richard Lefebvre
    The Oracle RightNow Accelerate solution from DNASTREAM allows each customer to enjoy quicker deployment and earlier time to benefit from this SaaS Customer Experience solution. At the start of the project, a full suite of e-learning simulations and materials is provided by DNASTREAM to match the customer’s processes. This RapidLaunch content library for RightNow can be leveraged by our customers early in their project implementations, bringing significant cost efficiencies, time reduction and improved user adoption to their project roll-outs. Solution Profile: This Oracle Accelerate solution is based on Oracle RightNow CX and includes Content management, Contact management, Incident management, Customer Portal, Closed incident Survey and Standard reports. As an additional option, the Oracle RightNow CX Chat implementation is available. For more information about RightNow and the DNASTREAM Accelerate solution, visit the Oracle Accelerate microsite or contact www.dnastream.com

    Read the article

  • UPK Content State

    - by peter.maravelias
    State is an editable property for communicating the status of a document in the UPK library. This is particularly helpful when working with other authors in a development team. Authors can assign a state to any document using the values that are defined in the master list. The default master list of State values includes Not Started, Draft, In Review, and Final (in the language installed on the server). Administrators can customize the list by adding, deleting, or renaming the values as well as sequencing the values as they will appear on the assignment list from the Properties pane. Let us know if or how you are using UPK Content States in your development efforts!

    Read the article

  • EntLib for Windows Azure

    - by kaleidoscope
    Enterprise Library, popularly known as EntLib, is a collection of Application Blocks aimed at managing oft-needed, redundant tasks in enterprise development, like logging, caching, validation, cryptography, etc. EntLib currently exposes nine application blocks:
    Caching Application Block
    Cryptography Application Block
    Data Access Application Block
    Exception Handling Application Block
    Logging Application Block
    Policy Injection Application Block
    Security Application Block
    Validation Application Block
    Unity Dependency Injection and Interception Mechanism
    Ever since the honeymoon period of PoCs and tryouts ended and Azure started to go mainstream, and more precisely started to go "Enterprise", Azure developers have been demanding EntLib for Azure. The demand seems to have finally been heard, and the powers that be have given us the current beta release, EntLib 5.0, which supports Windows Azure. The application blocks tailored for Azure are:
    Data Access Application Block (think SQL Azure)
    Exception Handling Application Block (Windows Azure Diagnostics)
    Logging Application Block (Windows Azure Diagnostics)
    Validation Application Block
    Unity Dependency Injection Mechanism
    The EntLib 5.0 beta is now available for download.
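    To make the list above a little more concrete, here is a minimal sketch of what consuming two of these blocks looks like from application code. The policy and category names ("AzurePolicy", "General") are placeholders, and wiring them up to Windows Azure Diagnostics happens in configuration, which is omitted here.

      using System;
      using Microsoft.Practices.EnterpriseLibrary.ExceptionHandling;
      using Microsoft.Practices.EnterpriseLibrary.Logging;

      class Worker
      {
          public void Process()
          {
              try
              {
                  Logger.Write("Processing started", "General"); // Logging Application Block
                  // ... real work goes here ...
              }
              catch (Exception ex)
              {
                  // Exception Handling Application Block: the named policy decides
                  // whether to log, wrap, replace or rethrow the exception.
                  bool rethrow = ExceptionPolicy.HandleException(ex, "AzurePolicy");
                  if (rethrow)
                      throw;
              }
          }
      }

    The application code stays the same whether the configured trace listener writes to a local log file or to Windows Azure Diagnostics, which is exactly the appeal of bringing EntLib to Azure.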

    Read the article

  • What can cause an increase in inactive memory and how to reclaim it?

    - by Boaz
    Hi all, I have a heavy application running on a CentOS server and I'm seeing some strange memory behavior. Here is a snapshot of a munin graph: As you can see, the amount of committed memory increases gradually, causing the swap file to be used. What strikes me as odd is that the amount of inactive memory keeps growing as well. It is my understanding that inactive memory is actually memory that has been freed up but not yet cleaned by the OS and put back in the free memory pool. It seems that running out of memory is actually caused by this lack of clean-up, but I may be wrong. Can you give some tips to find the cause of the problem and/or to make CentOS reclaim the inactive memory? Thanks.

    Some extra info:
    1) I have a tmpfs mounted on /tmp and the number of files stored there grows (but it is double the amount of the inactive memory).
    2) cat /proc/meminfo (at a later stage than the image) gives:

      MemTotal: 14371428 kB
      MemFree: 1207108 kB
      Buffers: 35440 kB
      Cached: 4276628 kB
      SwapCached: 785316 kB
      Active: 9038924 kB
      Inactive: 3902876 kB
      HighTotal: 0 kB
      HighFree: 0 kB
      LowTotal: 14371428 kB
      LowFree: 1207108 kB
      SwapTotal: 10223608 kB
      SwapFree: 6438320 kB
      Dirty: 627792 kB
      Writeback: 0 kB
      AnonPages: 7844560 kB
      Mapped: 49304 kB
      Slab: 146676 kB
      PageTables: 27480 kB
      NFS_Unstable: 0 kB
      Bounce: 0 kB
      CommitLimit: 17409320 kB
      Committed_AS: 16471488 kB
      VmallocTotal: 34359738367 kB
      VmallocUsed: 275852 kB
      VmallocChunk: 34359462007 kB
      HugePages_Total: 0
      HugePages_Free: 0
      HugePages_Rsvd: 0
      Hugepagesize: 2048 kB

    3) The application is a combination of MySQL, Heritrix (http://crawler.archive.org/) and a Tomcat-based Java servlet to manage things.

    Read the article

  • What is the best database design and/or software to model a thesaurus?

    - by Miles O'Keefe
    I would like to design a web app that functions as a simple thesaurus: a long list of words with attributes, all of which are linked to each other. Wikipedia defines it as: In Information Science, Library Science, and Information Technology, specialized thesauri are designed for information retrieval. They are a type of controlled vocabulary, for indexing or tagging purposes. Such a thesaurus can be used as the basis of an index for online material. The Art and Architecture Thesaurus, for example, is used to index the Canadian [...]. Information retrieval thesauri are formally organized so that existing relationships between concepts are made explicit. What database software, design or model would best fit this? Are PHP and MySQL good technologies to handle it?
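    Whatever engine ends up hosting it, a thesaurus usually boils down to terms plus typed links between them (broader/narrower/related/use-for, as in standard controlled vocabularies). A small illustrative sketch of that data model; all names here are made up for the example and are not taken from the question:

      using System.Collections.Generic;

      // One entry in the controlled vocabulary.
      public class Term
      {
          public int Id { get; set; }
          public string Label { get; set; }
          public string ScopeNote { get; set; } // free-text attribute
          public List<TermRelation> Relations = new List<TermRelation>();
      }

      // The standard thesaurus relationship types.
      public enum RelationType { Broader, Narrower, Related, UseFor }

      // A typed, directed link between two terms; maps naturally to a join table.
      public class TermRelation
      {
          public int FromTermId { get; set; }
          public int ToTermId { get; set; }
          public RelationType Type { get; set; }
      }

    Stored relationally, that is essentially two tables (terms and term_relations), which MySQL or any other relational engine handles comfortably.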

    Read the article
