Search Results

Search found 1562 results on 63 pages for 'obtain'.


  • How can I access cpu temperature readings using c#?

    - by Paul
    For a programming project I would like to access the temperature readings from my CPU and GPUs. I will be using C#. From various forums I get the impression that you need board-specific information and developer resources in order to access that data. I have an MSI NF750-G55 board. MSI's website does not have any of the information I am looking for, and when I tried their tech support, the rep I spoke with stated that they do not have any such information. There must be a way to obtain that info. Any thoughts?
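
    One generic avenue worth noting (a sketch, not guaranteed for this particular board): on many machines the ACPI thermal zone is exposed through WMI, so a vendor SDK isn't always required. MSAcpi_ThermalZoneTemperature is only populated if the BIOS exposes it, it often reflects a motherboard sensor rather than the CPU die, and GPU temperatures generally need vendor-specific APIs (e.g., NVIDIA's NVAPI):

        using System;
        using System.Management; // add a reference to System.Management.dll

        class ThermalZoneReader
        {
            static void Main()
            {
                var searcher = new ManagementObjectSearcher(
                    @"root\WMI", "SELECT * FROM MSAcpi_ThermalZoneTemperature");
                foreach (ManagementObject zone in searcher.Get())
                {
                    // CurrentTemperature is reported in tenths of a degree Kelvin
                    double kelvin = Convert.ToUInt32(zone["CurrentTemperature"]) / 10.0;
                    Console.WriteLine("{0}: {1:F1} °C", zone["InstanceName"], kelvin - 273.15);
                }
            }
        }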

    Read the article

  • parallel vms in VMWare Server - how to configure network so they can ping each other?

    - by IronGoofy
    I'm using VMware Server (currently version 1.0.7) and have two VMs that I would like to run at the same time. However, I'm having problems setting them up so they can ping each other. I've configured them to use 'Bridged' networking. They both obtain an IP address from the DHCP server on my network, but after that they can't ping each other. It seems that only the first one has a functioning network connection (I can ping it from the host machine and its Internet connection works), but the other one does not. If it helps, both VMs are running XP SP3. Any ideas? Thanks!
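
    One thing to rule out first (an assumption on my part - the post doesn't say whether a guest firewall is active): Windows XP SP2 and later block inbound ICMP echo requests by default, so two bridged VMs can each have a working network connection yet still fail to ping one another. As a quick test, temporarily disable the firewall inside each guest, retry the ping, and re-enable it afterwards:

        rem Run inside each XP guest (XP-era netsh syntax), then retry the ping
        netsh firewall set opmode disable
        rem ... test with ping, then restore the firewall ...
        netsh firewall set opmode enable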

    Read the article

  • I suspect that my HDD is causing hardlocks, as all other components have been replaced. How can I check this theory and solve the potential cause?

    - by user867814
    I have had this problem for quite a while now, through multiple Linux kernel versions and distributions, and after replacement of every component aside from my main HDD - RAM, GPU (twice), motherboard, CPU, power supply. What happens is that at some point during operation the PC will hardlock - everything stops working, the external HDD is not shut down correctly and continues to spin until I unplug and reconnect it, there are no system/kernel logs of any kind, and there is nothing else that would suggest a cause. Another reason for my suspicion is that the failures happen almost exclusively during HDD read/write activity - shutdown (nearly 1/3 of the time so far, though it's only been a few days), launching programs, and once during an apt run. I hope the post is descriptive enough; if you need any additional info, ask (and tell me how to prepare/obtain it) and I will provide it. If I'm wrong, point me in the right direction. Thanks in advance.
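
    A sketch of how to test the drive theory, assuming the disk is /dev/sda (adjust for your system) and that smartmontools is installed:

        # SMART health summary, attributes, and logged errors
        sudo smartctl -a /dev/sda
        # Kick off a long self-test; re-run 'smartctl -a' later to see the result
        sudo smartctl -t long /dev/sda
        # Non-destructive read-write surface scan; run it on an unmounted disk
        # (e.g., from a live CD) - it is slow and touches every block
        sudo badblocks -nsv /dev/sda

    Reallocated or pending sector counts in the SMART output, or any hit from badblocks, would support the HDD theory; a clean pass points back at something else (cabling and the disk controller are worth suspecting too).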

    Read the article

  • TYPO3 unable to log in

    - by Agata
    Hi, my company prepared a new website based on TYPO3 (I don't know the version, but they started to prepare it in October 2011, so it's probably from that time). A couple of days ago I got my user name and password, but I'm unable to log in (I tried with IE, Firefox and Chrome). Each time I try, TYPO3 behaves as if I were entering the wrong user name or password (I'm certain I'm entering the right ones; I also tried another user's credentials and it ended the same way). I use Windows 7 and Kaspersky Internet Security 9.0.0.736 - might their settings block TYPO3? I really don't know what to do and can't get help from the IT department (in the head office in another country)... Thank you for any suggestions in advance.

    Read the article

  • Apple push Notification Feedback service Not working

    - by Yassmeen
    Hi, I am developing an iPhone app that uses Apple Push Notifications. On the iPhone side everything is fine; on the server side I have a problem. Notifications are sent correctly, but when I query the feedback service to obtain a list of devices from which the app has been uninstalled, I always get zero results. I know that I should get one result, as the app has been uninstalled from one of my test devices. After more than 24 hours I still have no results from the feedback service. Any ideas? Does anybody know how long it takes for the feedback service to recognize that my app has been uninstalled from my test device? Note: I have other push notification applications on the device, so I know that my app is not the only one. The code (C#):

        public static string CheckFeedbackService(string certaName, string hostName)
        {
            SYLogger.Log("Check Feedback Service Started");
            ServicePointManager.ServerCertificateValidationCallback =
                new RemoteCertificateValidationCallback(ValidateServerCertificate);

            TcpClient tcpClientF = null;
            SslStream sslStreamF = null;
            string result = string.Empty;

            // Connect to APNS & add the Apple cert to our collection
            X509Certificate2Collection certs =
                new X509Certificate2Collection { GetServerCert(certaName) };

            // Set up
            byte[] buffer = new byte[38];
            int recd = 0;
            DateTime minTimestamp = DateTime.Now.AddYears(-1);

            // Create a TCP socket connection to the Apple server on port 2196
            try
            {
                using (tcpClientF = new TcpClient(hostName, 2196))
                {
                    SYLogger.Log("Client Connected ::" + tcpClientF.Connected);
                    // Create a new SSL stream over the connection
                    sslStreamF = new SslStream(tcpClientF.GetStream(), true, ValidateServerCertificate);
                    // Authenticate using the Apple cert
                    sslStreamF.AuthenticateAsClient(hostName, certs, SslProtocols.Default, false);
                    SYLogger.Log("Stream Readable ::" + sslStreamF.CanRead);
                    SYLogger.Log("Host Name ::" + hostName);
                    SYLogger.Log("Cert Name ::" + certs[0].FriendlyName);
                    if (sslStreamF != null)
                    {
                        SYLogger.Log("Connection Started");
                        // Get the first feedback
                        recd = sslStreamF.Read(buffer, 0, buffer.Length);
                        SYLogger.Log("Buffer length ::" + recd);
                        // Continue while we have results and are not disposing
                        while (recd > 0)
                        {
                            SYLogger.Log("Reading Started");
                            // Get our seconds since 1970
                            byte[] bSeconds = new byte[4];
                            byte[] bDeviceToken = new byte[32];
                            Array.Copy(buffer, 0, bSeconds, 0, 4);
                            // Check endianness
                            if (BitConverter.IsLittleEndian)
                                Array.Reverse(bSeconds);
                            int tSeconds = BitConverter.ToInt32(bSeconds, 0);
                            // Add seconds since 1970 to that date, in UTC, and then get it locally
                            var Timestamp = new DateTime(1970, 1, 1, 0, 0, 0, DateTimeKind.Utc)
                                .AddSeconds(tSeconds).ToLocalTime();
                            // Now copy out the device token
                            Array.Copy(buffer, 6, bDeviceToken, 0, 32);
                            string deviceToken =
                                BitConverter.ToString(bDeviceToken).Replace("-", "").ToLower().Trim();
                            // Make sure we have a good feedback tuple
                            if (deviceToken.Length == 64 && Timestamp > minTimestamp)
                            {
                                SYLogger.Log("Feedback " + deviceToken);
                                result = deviceToken;
                            }
                            // Clear the array to reuse it
                            Array.Clear(buffer, 0, buffer.Length);
                            // Read the next feedback
                            recd = sslStreamF.Read(buffer, 0, buffer.Length);
                        }
                        SYLogger.Log("Reading Ended");
                    }
                }
            }
            catch (Exception e)
            {
                SYLogger.Log("Authentication failed - closing the connection::" + e);
                return "NOAUTH";
            }
            finally
            {
                // The client stream will be closed with the sslStream
                // because we specified this behavior when creating the sslStream.
                if (sslStreamF != null) sslStreamF.Close();
                if (tcpClientF != null) tcpClientF.Close();
                // Clear the array on error
                Array.Clear(buffer, 0, buffer.Length);
            }
            SYLogger.Log("Feedback ended ");
            return result;
        }
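
    For reference, the magic numbers above (the 38-byte buffer, the 4-byte timestamp, and the token copy starting at offset 6) follow the documented layout of each feedback tuple:

        // Each tuple returned by the feedback service on port 2196 is 38 bytes, big-endian:
        //   bytes 0-3    time of the uninstall as a UNIX timestamp (seconds since 1970)
        //   bytes 4-5    token length, always 0x00 0x20 (32)
        //   bytes 6-37   the 32-byte device token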

    Read the article

  • How to implement wait(); to wait for a notifyAll(); from enter button?

    - by Dakota Miller
    Sorry for the confusion - I had posted the wrong logcat info; I've updated the question. I want to click Start to start a thread; then, when Enter is clicked, I want the thread to continue, get the message, handle it in the thread, and then hand the result to the main thread to update the text view. How would I start a thread that waits for Enter to be pressed and gets the bundle for the Handler? Here is my code:

        public class MainActivity extends Activity implements OnClickListener {
            Handler mHandler;
            Button enter;
            Button start;
            TextView display;
            String dateString;

            @Override
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                setContentView(R.layout.activity_main);
                enter = (Button) findViewById(R.id.enter);
                start = (Button) findViewById(R.id.start);
                display = (TextView) findViewById(R.id.Display);
                enter.setOnClickListener(this);
                start.setOnClickListener(this);
                mHandler = new Handler() { // <============================= this is line 31
                    public void handleMessage(Message msg) {
                        super.handleMessage(msg);
                        Bundle bundle = msg.getData();
                        String string = bundle.getString("outKey");
                        display.setText(string);
                    }
                };
            }

            @Override
            public boolean onCreateOptionsMenu(Menu menu) {
                // Inflate the menu; this adds items to the action bar if it is present.
                getMenuInflater().inflate(R.menu.main, menu);
                return true;
            }

            @Override
            public void onClick(View v) {
                switch (v.getId()) {
                case R.id.enter:
                    Message msgin = Message.obtain();
                    Bundle bundlein = new Bundle();
                    String in = "It Works!";
                    bundlein.putString("inKey", in);
                    msgin.setData(bundlein);
                    notifyAll();
                    break;
                case R.id.start:
                    new myThread().hello.start();
                    break;
                }
            }

            public class myThread extends Thread {
                Thread hello = new Thread() {
                    @Override
                    public void run() {
                        super.run();
                        Looper.prepare();
                        try {
                            wait();
                        } catch (InterruptedException e) {
                            e.printStackTrace();
                        }
                        Handler Mhandler = new Handler() {
                            @Override
                            public void handleMessage(Message msg) {
                                super.handleMessage(msg);
                                Bundle bundle = msg.getData();
                                dateString = bundle.getString("inKey");
                            }
                        };
                        Looper.loop();
                        Message msg = Message.obtain();
                        Bundle bundle = new Bundle();
                        bundle.putString("outKey", dateString);
                        msg.setData(bundle);
                        mHandler.sendMessage(msg);
                    }
                };
            }
        }

    Here is the logcat info:

        06-27 00:00:24.832: E/AndroidRuntime(18513): FATAL EXCEPTION: Thread-1210
        06-27 00:00:24.832: E/AndroidRuntime(18513): java.lang.IllegalMonitorStateException: object not locked by thread before wait()
        06-27 00:00:24.832: E/AndroidRuntime(18513): at java.lang.Object.wait(Native Method)
        06-27 00:00:24.832: E/AndroidRuntime(18513): at java.lang.Object.wait(Object.java:364)
        06-27 00:00:24.832: E/AndroidRuntime(18513): at com.example.learninghandlers.MainActivity$myThread$1.run(MainActivity.java:77)
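
    The logcat explains the crash: wait() must be called while holding the object's monitor (inside a synchronized block on the same object), which the code above never does. For this use case, though, wait()/notifyAll() isn't needed at all - a worker thread with its own Looper simply sleeps until a message arrives. A minimal sketch using HandlerThread (assuming the same mHandler and bundle keys as above):

        // The worker owns a Looper, so the Enter button can post messages to it
        // instead of calling notifyAll().
        HandlerThread worker = new HandlerThread("worker");
        worker.start();
        final Handler workerHandler = new Handler(worker.getLooper()) {
            @Override
            public void handleMessage(Message msg) {
                String in = msg.getData().getString("inKey");
                // ... process in the background, then hand the result to the UI thread
                Message out = Message.obtain();
                Bundle b = new Bundle();
                b.putString("outKey", in);
                out.setData(b);
                mHandler.sendMessage(out); // mHandler updates the TextView on the main thread
            }
        };

        // In onClick, for R.id.enter:
        Message msgin = Message.obtain();
        Bundle bundlein = new Bundle();
        bundlein.putString("inKey", "It Works!");
        msgin.setData(bundlein);
        workerHandler.sendMessage(msgin);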

    Read the article

  • LINQ-like or SQL-like DSL for end-users to run queries to select (not modify) data?

    - by Mark Rushakoff
    For a utility I'm working on, the client would like to be able to generate graphic reports on the data that has been collected. I can already generate a couple of canned graphs (using ZedGraph, which is a very nice library); however, the utility would be much more flexible if the graphs were more programmable or configurable by the end-user.

    TLDR version: I want users to be able to use something like SQL to safely extract and select data from a List of objects that I provide and can describe. What free tools or libraries will help me accomplish this?

    Full version: I've given thought to using IronPython, IronRuby, and LuaInterface, but frankly they're all a bit overpowered for what I want to do. My classes are fairly simple, along the lines of:

        class Person:
            string Name;
            int HeightInCm;
            DateTime BirthDate;
            Weight[] WeighIns;

        class Weight:
            int WeightInKg;
            DateTime Date;
            Person Owner;

    (exact classes have been changed to protect the innocent). To come up with the data for the graph, the user will choose whether it's a bar graph, scatter plot, etc., and then to actually obtain the data, I would like to obtain some kind of List from the user simply entering something SQL-ish along the lines of:

        SELECT Name, AVG(WeighIns) FROM People
        SELECT WeightInKg, Owner.HeightInCm FROM Weights

    And as a bonus, it would be nice if you could actually do operations as well:

        SELECT WeightInKg, (Date - Owner.BirthDate) AS Age FROM Weights

    The DSL doesn't have to be compliant SQL in any way; it doesn't even have to resemble SQL, but I can't think of a more efficient descriptive language for the task. I'm fine filling in blanks; I don't expect a library to do everything for me. What I would expect to exist (but haven't been able to find in any way, shape, or form) is something like Fluent NHibernate (which I am already using in the project) where I can declare a mapping, something like:

        var personRequest = Request<Person>();
        personRequest.Item("Name", (p => p.Name));
        personRequest.Item("HeightInCm", (p => p.HeightInCm));
        personRequest.Item("HeightInInches", (p => p.HeightInCm * CM_TO_INCHES));
        // ...
        var weightRequest = Request<Weight>();
        weightRequest.Item("Owner", (w => w.Owner), personRequest); // indicate a chain to personRequest
        // ...
        var people = Table<Person>("People", GetPeopleFromDatabase());
        var weights = Table<Weight>("Weights", GetWeightsFromDatabase());
        // ...
        TryRunQuery(userInputQuery);

    LINQ is so close to what I want to do, but AFAIK there's no way to sandbox it. I don't want to expose any unnecessary functionality to the end user; meaning I don't want the user to be able to send in and process:

        from p in people select (p => { System.IO.File.Delete("C:\\something\\important"); return p.Name })

    So does anyone know of any free .NET libraries that allow something like what I've described above? Or is there some way to sandbox LINQ? cs-script is close too, but it doesn't seem to offer sandboxing yet either. I'd be hesitant to expose the NHibernate interface either, as the user should have a read-only view of the data at this point in the usage. I'm using C# 3.5, and pure .NET solutions would be preferred. The bottom line is that I'm really trying to avoid writing my own parser for a subset of SQL that would only apply to this single project.
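
    One way to get the sandboxing without writing a full SQL parser (a rough sketch of the whitelist idea, all names hypothetical): since every selectable column is already registered as a lambda, the query layer only ever looks selectors up by name, so user input never becomes executable code:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Request<T>
        {
            readonly Dictionary<string, Func<T, object>> items =
                new Dictionary<string, Func<T, object>>(StringComparer.OrdinalIgnoreCase);

            public void Item(string name, Func<T, object> selector) { items[name] = selector; }

            // The "SELECT a, b FROM ..." part reduces to projecting registered names;
            // an unknown name throws before any row is touched.
            public IEnumerable<object[]> Select(IEnumerable<T> source, params string[] columns)
            {
                var selectors = columns.Select(c => items[c]).ToArray();
                return source.Select(row => selectors.Select(s => s(row)).ToArray());
            }
        }

    Usage would mirror the fluent registration sketched in the question: register Name and HeightInCm on a Request<Person>, then call Select(people, "Name", "HeightInCm") with whatever column list the parsed user query produced.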

    Read the article

  • Qt: TableWidget's ItemAt() acting weirdly

    - by emredog
    Hi, I'm working on a Windows application where, in a dialog, I query some data from Postgres and manually show the output in a table widget:

        m_ui->tableWidget->setRowCount(joinedData.count());
        for (int i = 0; i < joinedData.count(); i++) // for each row
        {
            m_ui->tableWidget->setItem(i, 0, new QTableWidgetItem(joinedData[i].bobin.referenceNumber));
            m_ui->tableWidget->setItem(i, 1, new QTableWidgetItem(QString::number(joinedData[i].bobin.width)));
            m_ui->tableWidget->setItem(i, 2, new QTableWidgetItem(QString::number(joinedData[i].tolerance.getHole())));
            m_ui->tableWidget->setItem(i, 3, new QTableWidgetItem(QString::number(joinedData[i].tolerance.getLessThanZeroFive())));
            m_ui->tableWidget->setItem(i, 4, new QTableWidgetItem(QString::number(joinedData[i].tolerance.getZeroFive_to_zeroSeven())));
            m_ui->tableWidget->setItem(i, 5, new QTableWidgetItem(QString::number(joinedData[i].tolerance.getZeroFive_to_zeroSeven_repetitive())));
            m_ui->tableWidget->setItem(i, 6, new QTableWidgetItem(QString::number(joinedData[i].tolerance.getZeroSeven_to_Three())));
            m_ui->tableWidget->setItem(i, 7, new QTableWidgetItem(QString::number(joinedData[i].tolerance.getThree_to_five())));
            m_ui->tableWidget->setItem(i, 8, new QTableWidgetItem(QString::number(joinedData[i].tolerance.getMoreThanFive())));
        }

    Also, based on row and column information, I paint some of these table widget items with certain colors, but I don't think that's relevant. I reimplemented the QDialog's contextMenuEvent to obtain the right-clicked tableWidgetItem's row and column coordinates:

        void BobinFlanView::contextMenuEvent(QContextMenuEvent *event)
        {
            QMenu menu(m_ui->tableWidget);
            // standard actions
            menu.addAction(this->markInactiveAction);
            menu.addAction(this->markActiveAction);
            menu.addSeparator();
            menu.addAction(this->exportAction);
            menu.addAction(this->exportAllAction);

            // obtain the right-clicked item
            QTableWidgetItem* clickedItem =
                m_ui->tableWidget->itemAt(m_ui->tableWidget->mapFromGlobal(event->globalPos()));

            // if it's a colored one, add some more actions
            if (clickedItem && clickedItem->column() > 1 && clickedItem->row() > 0)
            {
                // this is a property, I'm keeping it for later use
                this->lastRightClickedItem = clickedItem;

                // debug purposes:
                QMessageBox::information(this, "",
                    QString("clickedItem = %1, %2").arg(clickedItem->row()).arg(clickedItem->column()));
                QMessageBox::information(this, "",
                    QString("globalClick = %1, %2\ntransformedPos = %3, %4")
                        .arg(event->globalX()).arg(event->globalY())
                        .arg(m_ui->tableWidget->mapFromGlobal(event->globalPos()).x())
                        .arg(m_ui->tableWidget->mapFromGlobal(event->globalPos()).y()));

                menu.addSeparator();
                menu.addAction(this->changeSelectedToleranceToUygun);
                menu.addAction(this->changeSelectedToleranceToUyar);
                menu.addAction(this->changeSelectedToleranceToDurdurUyar);
                // ... some other irrelevant 'enable/disable' activities
                menu.exec(event->globalPos());
            }
        }

    The problem is, when I right-click on the same item I get the same global coordinates, but randomly different row-column information. For instance, the global pos is exactly 600,230 but the row-column pair is randomly (5,3) or (4,3). I mean, what?! Also, when I click an item in the last two rows (row 13 or later, I guess), it never enters the condition if (clickedItem && clickedItem->column() > 1 && clickedItem->row() > 0); I think that's mainly because clickedItem is null. I'll be more than glad to share any more information, or even the full cpp-h-ui trio, in order to get help. Thanks a lot.
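
    A likely culprit (an educated guess, not from the original thread): QTableWidget::itemAt() expects coordinates relative to the table's viewport, while mapFromGlobal() on the table widget itself leaves the header and frame offset in the point. That offset shifts hit tests by roughly a row, and near the bottom of the table it pushes clicks past the last row, so itemAt() returns null. A sketch of the fix:

        // Map through the viewport - the coordinate system itemAt() actually uses
        QTableWidgetItem* clickedItem = m_ui->tableWidget->itemAt(
                m_ui->tableWidget->viewport()->mapFromGlobal(event->globalPos()));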

    Read the article

  • SQLAuthority News – Statistics Used by the Query Optimizer in Microsoft SQL Server 2008 – Microsoft Whitepaper

    - by pinaldave
    I recently presented a session on statistics and best practices at Virtual Tech Days on Nov 22, 2010. The session was very popular, and I got many questions right after it. The number one question I received was where everybody could get further information. I am very happy that my session created some curiosity about one of the most important features of SQL Server. Statistics are the heart of SQL Server. Microsoft has published a white paper on how statistics are used by the Query Optimizer. Here is the abstract of that white paper from Microsoft. Statistics Used by the Query Optimizer in Microsoft SQL Server 2008. Writers: Eric N. Hanson and Yavor Angelov. Microsoft SQL Server 2008 collects statistical information about indexes and column data stored in the database. These statistics are used by the SQL Server query optimizer to choose the most efficient plan for retrieving or updating data. This paper describes what data is collected, where it is stored, and which commands create, update, and delete statistics. By default, SQL Server 2008 also creates and updates statistics automatically, when such an operation is considered to be useful. This paper also outlines how these defaults can be changed on different levels (column, table, and database). In addition, it presents how certain query language features, such as Transact-SQL variables, interact with use of statistics by the optimizer, and it provides guidance for using these features when writing queries so you can obtain good query performance. Link to white paper: Statistics Used by the Query Optimizer in Microsoft SQL Server 2008. Reference: Pinal Dave (http://blog.SQLAuthority.com)   Filed under: Pinal Dave, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL White Papers, SQLAuthority News, T SQL, Technology
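
    As a quick taste of what the paper covers - the commands that create, update, inspect, and delete statistics - here is an illustrative T-SQL sketch (the table and statistics names are made up for the example):

        -- Create a statistics object by hand on one column
        CREATE STATISTICS st_Orders_CustomerID ON dbo.Orders (CustomerID);
        -- Refresh all statistics on the table
        UPDATE STATISTICS dbo.Orders;
        -- Inspect the histogram and density information
        DBCC SHOW_STATISTICS ('dbo.Orders', st_Orders_CustomerID);
        -- Remove the manually created statistics object
        DROP STATISTICS dbo.Orders.st_Orders_CustomerID;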

    Read the article

  • SQL SERVER – Configure Management Data Collection in Quick Steps – T-SQL Tuesday #005

    - by pinaldave
    This article was written as a response to T-SQL Tuesday #005 - Reporting. The three most important components of any computer or server are the CPU, memory, and hard disk. This post talks about how to get more details about these three components using Management Data Collection, which generates reports for them by default. Configuring Data Collection is a very easy task and can be done very quickly. Please note: there are many different ways to get reports on CPU, memory, and IO; you can use DMVs, Extended Events, as well as Perfmon to trace the data. In keeping with the T-SQL Tuesday subject of reporting, this post is a visual tutorial for quickly configuring Data Collection and generating reports. From Books Online: "The data collector is a core component of the Data Collection platform for SQL Server 2008 and the tools that are provided by SQL Server. The data collector provides one central point for data collection across your database servers and applications. This collection point can obtain data from a variety of sources and is not limited to performance data, unlike SQL Trace." Let us go over the visual tutorial of how quickly Data Collection can be configured: expand the Management node under the main server node and follow the directions in the pictures. These reports can be exported to PDF as well as Excel by right-clicking on the report. Now let us see some additional screenshots of the reports. The reports are very self-explanatory but can be drilled down into for further details. Click on an image to make it larger. Well, as we can see, it is very easy to configure and utilize this tool. Do you use this tool in your organization? Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SQL Reporting, SQL Reports
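
    For readers who prefer a script to screenshots, the system collection sets can also be inspected and started from T-SQL (a sketch - the data collector objects live in msdb; check the id values returned by the query before starting a set):

        -- List the collection sets and whether they are currently running
        SELECT collection_set_id, name, is_running
        FROM msdb.dbo.syscollector_collection_sets;

        -- Start one of the system collection sets by the id found above
        EXEC msdb.dbo.sp_syscollector_start_collection_set @collection_set_id = 2;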

    Read the article

  • Azure Grid Computing - Worker Roles as HPC Compute Nodes

    - by JoshReuben
    Overview
    · With HPC 2008 R2 SP1 you can add Azure worker roles as compute nodes in a local Windows HPC Server cluster.
    · The subscription for Windows Azure is, like any other Azure service, charged for the time that the role instances are available, as well as for the compute and storage services that are used on the nodes.
    · Win-win? Azure charges the compute-hour cost (according to VM size) amortized over a month, so you save on purchasing compute node hardware. Microsoft wins because you need to purchase HPC to have a local head node for managing this compute cluster grid distributed in the cloud.
    · Blob storage is used to hold the input and output files of each job. I can see how Parametric Sweep HPC jobs can be supported (where the same job is run multiple times on each node against different input units), but not MPI.NET (where different HPC Job instances function as coordinated agents and conduct master-slave inter-process communication), unless Azure is somehow tunneling MPI communication through inter-WorkerRole Azure Queues.
    · This is not the end of the story for Azure grid computing. If MS requires you to purchase a local HPC license (and administrate it), what's to stop a third party from doing this and encapsulating and exposing the HPC WCF Broker Service to you for managing compute nodes? If MS doesn't provide the head node as a service, someone else will!

    Process
    · Requires creation of a worker node template that specifies a connection to an existing subscription for Windows Azure, plus an availability policy for the worker nodes.
    · After worker nodes are added to the cluster, you can start them, which provisions the Windows Azure role instances, and then bring them online to run HPC cluster jobs.
    · A Windows Azure worker role instance runs an HPC-compatible Azure guest operating system, which runs on the VMs that host your service. The guest operating system is updated monthly. You can choose to upgrade the guest OS for your service automatically each time an update is released - all role instances defined by your service will run on the guest operating system version that you specify. See Windows Azure Guest OS Releases and SDK Compatibility Matrix (http://go.microsoft.com/fwlink/?LinkId=190549).
    · Use the hpcpack command to upload file packages and install files to run on the worker nodes. See hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514).

    Requirements
    · This assumes you have an Azure subscription account and the HPC head node installed and configured.
    · Install HPC Pack 2008 R2 SP1 - see Microsoft HPC Pack 2008 R2 Service Pack 1 Release Notes (http://go.microsoft.com/fwlink/?LinkID=202812).
    · Configure the head node to connect to the Internet - connectivity is provided by the connection of the head node to the enterprise network. You may need to configure a proxy client on the head node. Any cluster network topology (1-5) is supported.
    · Configure the firewall - allow outbound TCP traffic on the following ports: 80, 443, 5901, 5902, 7998, 7999.
    · Note: HPC Server uses Admin Mode (elevated privileges) in Windows Azure to give the service administrator of the subscription the necessary privileges to initialize HPC cluster services on the worker nodes.
    · Obtain a Windows Azure subscription certificate - the Windows Azure subscription must be configured with a public subscription (API) certificate: a valid X.509 certificate with a key size of at least 2048 bits. Generate a self-signed certificate and upload a .cer file via the Windows Azure Portal Account page > Manage my API Certificates link. See Using the Windows Azure Service Management API (http://go.microsoft.com/fwlink/?LinkId=205526).
    · Import the certificate, with an associated private key, on the HPC cluster head node - into the trusted root store of the local computer account.

    Obtain Windows Azure Connection Information for HPC Server
    · Required for each worker node template.
    · Copy it from the Azure portal: navigation pane > Hosted Services > Storage Accounts & CDN.
    · Subscription ID - a 32-char hex string in the form xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx. In the Properties pane.
    · Subscription certificate thumbprint - a 40-char hex string (you need to remove spaces). In Management Certificates > Properties pane.
    · Service name - the value of <ServiceName> configured in the public URL of the service (http://<ServiceName>.cloudapp.net). In Hosted Services > Properties pane.
    · Blob storage account name - the value of <StorageAccountName> configured in the public URL of the account (http://<StorageAccountName>.blob.core.windows.net). In Storage Accounts > Properties pane.

    Import the Azure Subscription Certificate on the HPC Head Node
    · This enables the services for Windows HPC Server to authenticate properly with the Windows Azure subscription.
    · Use the Certificates MMC snap-in to import the certificate into the Trusted Root Certification Authorities store of the local computer account. The certificate must be in PFX format (a .pfx or .p12 file) with a private key that is protected by a password. See Certificates (http://go.microsoft.com/fwlink/?LinkId=163918).
    · To open the Certificates snap-in: Run > mmc, then File > Add/Remove Snap-in > Certificates > Computer account > Local Computer.
    · To import the certificate via the wizard: Certificates > Trusted Root Certification Authorities > Certificates > All Tasks > Import.
    · After the certificate is imported, it appears in the details pane of the Certificates snap-in. You can open the certificate to check its status.

    Configure a Proxy Client on the HPC Head Node
    · The following Windows HPC Server services must be able to communicate over the Internet (through the firewall) with the services for Windows Azure: HPCManagement, HPCScheduler, HPCBrokerWorker.

    Create a Windows Azure Worker Node Template
    · Edit HPC node templates in the HPC Node Template Editor.
    · Specify: 1) the Windows Azure subscription connection info (unique service name) for adding a set of worker nodes to the cluster, and 2) the worker node availability policy - rules for deploying/removing worker role instances in Windows Azure:
      o HPC Cluster Manager > Configuration > navigation pane > Node Templates > Actions pane > New (Create Node Template Wizard) or Edit (Node Template Editor)
      o Choose Node Template Type page - Windows Azure worker node template
      o Specify Template Name page - template name and description
      o Provide Connection Information page - Azure subscription ID (text) and subscription certificate (browse)
      o Provide Service Information page - Azure service name plus blob storage account name (optionally click Retrieve Connection Information to get the list available from Azure - possibly a long-running task)
      o Configure Azure Availability Policy page - how Windows Azure worker nodes start/stop (bring the worker role instance online/offline - add/remove) - manual or automatic
      o For automatic: in the Configure Windows Azure Worker Availability Policy dialog, select the days and hours for worker nodes to start and stop.
    · To validate the Windows Azure connection information, go to the template's Connection Information tab > Validate connection information.
    · You can upload a file package to the storage account that is specified in the template - e.g., upload application or service files that will run on the worker nodes. See hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514).

    Add Azure Worker Nodes to the HPC Cluster
    · Use the Add Node Wizard and specify: 1) the worker node template, 2) the number of worker nodes (within the quota of role instances in the Azure subscription), and 3) the VM size of the worker nodes: ExtraSmall, Small, Medium, Large, or ExtraLarge.
    · To add worker nodes of different sizes, you must run the Add Node Wizard separately for each size.
    · All worker nodes that are added to the cluster by using a specific worker node template define a set of worker nodes that will be deployed and managed together in Windows Azure when you start the nodes. This includes worker nodes that you add later by using the worker node template and, if you choose, worker nodes of different sizes. You cannot start, stop, or delete individual worker nodes.
    · To add Windows Azure worker nodes:
      o In HPC Cluster Manager: Node Management > Actions pane > Add Node (Add Node Wizard)
      o Select Deployment Method page - Add Azure Worker nodes
      o Specify New Nodes page - select a worker node template, and specify the number and size of the worker nodes
    · After you add worker nodes to the cluster, they are in the Not-Deployed state and have a health state of Unapproved. Before you can use the worker nodes to run jobs, you must start them and then bring them online.
    · Worker nodes are numbered consecutively in a naming series that begins with the root name AzureCN - this is non-configurable.

    Deploying Windows Azure Worker Nodes
    · To deploy the role instances in Windows Azure, start the worker nodes added to the HPC cluster and bring the nodes online so that they are available to run cluster jobs. This can be configured in the HPC Azure worker node template (Azure Availability Policy) to be automatic or manual.
    · The Start, Stop, and Delete actions take place on the set of worker nodes that are configured by a specific worker node template. You cannot perform one of these actions on a single worker node in a set. You also cannot perform a single action on two sets of worker nodes (specified by two different worker node templates).
    · Starting a set of worker nodes deploys a set of worker role instances in Windows Azure, which can take some time to complete, depending on the number of worker nodes and the performance of Windows Azure.
    · To start worker nodes manually and bring them online:
      o In HPC Node Management > navigation pane > Nodes > List/Heat Map view, select one or more worker nodes.
      o Actions pane > Start - in the Start Azure Worker Nodes dialog, select a node template.
      o The state of the worker nodes changes from Not Deployed; to track the provisioning progress, see the worker node Details pane > Provisioning Log tab.
      o If there were errors during the provisioning of one or more worker nodes, the state of those nodes is set to Unknown and the node health is set to Unapproved. To determine the reason for the failure, review the provisioning logs for the nodes.
      o After a worker node starts successfully, the node state changes to Offline. To bring the nodes online, select the nodes that are in the Offline state > Bring Online.
    · Troubleshooting:
      o Check the node template.
      o Use telnet to test connectivity: telnet <ServiceName>.cloudapp.net 7999
      o Check node status - deployment status information appears in the service account information in the Windows Azure Portal (HPC queries this); see the node status information for any failed nodes in HPC Node Management.
    · When role instances are deployed, file packages that were previously uploaded to the storage account using the hpcpack command are automatically installed. You can also upload file packages to storage after the worker nodes are started, and then manually install them on the worker nodes. See hpcpack (http://go.microsoft.com/fwlink/?LinkID=205514).
    · To remove a set of role instances in Windows Azure, stop the nodes by using HPC Cluster Manager (apply the Stop action). This deletes the role instances from the service and changes the state of the worker nodes in the HPC cluster to Not Deployed.
    · Each time that you start a set of worker nodes, two proxy role instances (size Small) are configured in Windows Azure to facilitate communication between HPC Cluster Manager and the worker nodes. The proxy role instances are not listed in HPC Cluster Manager after the worker nodes are added. However, the instances appear in the Windows Azure Portal. The proxy role instances incur charges in Windows Azure along with the worker node instances, and they count toward the quota of role instances in the subscription.

    Read the article

  • Book Review – Developer's Guide to Collections in Microsoft® .NET

    - by Lori Lalonde
    Developer's Guide to Collections in Microsoft® .NET, by Calvin Janes, discusses the various collections available in the built-in .NET libraries, as well as the advantages and disadvantages of using each type of collection. Other areas are also covered, including how collections utilize memory, how to use LINQ with collections, using threading with collections, serializing collections, and how to bind collections to controls in Windows Forms, WPF, and Silverlight. For developers looking for a simple reference book on collections, this book will serve that purpose and serve it well. Those looking for a great read from cover to cover may be disappointed: the book tends to be repetitive in discussion topics, examples, and code samples in its first two parts. In the first part, the author conducts walk-throughs to develop custom collections; in the second part, walk-throughs of using the built-in .NET collections. For experienced .NET developers, the first two parts will not provide much value. However, they are beneficial for new developers who have not worked with the built-in collections in .NET; they will gain an understanding of the mechanics of the built-in collections and how memory is utilized by the various types of collections, so in this respect new developers will get more value out of this book. The third and fourth parts delve into advanced topics, including LINQ, threading, serialization, and data binding. I find these two parts well written, and they flow better than the first two. Both beginner and experienced developers will find value in this half of the book, mainly in the topics of threading and serialization. The eBook format of this book was provided free through O'Reilly's Blogger Review program. This book can be purchased from the O'Reilly book store at: http://shop.oreilly.com/product/0790145317193.do

    Read the article

  • .NET Rocks is on the Road Again!

    - by Scott Spradlin
    Carl and Richard are loading up the DotNetMobile (a 30-foot RV) and driving to our town again to show off their favorite bits of Visual Studio 2010 and .NET 4.0! Richard talks about web load testing, and Carl talks about Silverlight 4.0 and multimedia. And to make the night even more fun, they are going to bring a mystery rock star from the Visual Studio world to the event and interview them for a special .NET Rocks Road Trip show series. Along the way we'll be giving away some great prizes, showing off some awesome technology, and having a ton of laughs. So come out to the most fun you can have in a geeky evening - and learn a few things along the way about web load testing and Silverlight 4, plus what's new and cool in Visual Studio 2010! One lucky person at the event will win "Ride Along with Carl and Richard" and get to board the RV and ride with the boys to the next town on the tour - Chicago. (Don't worry, they will get you home again!) To ensure we have sufficient food for everyone, please register for this event at http://stlnet.eventbrite.com. The registration information will only be used to obtain accurate counts for food preparation; all other answers are optional and will be used for purely statistical analysis. No information will be shared outside the St. Louis .NET User Group.

    Here is a list of prizes to be given away at the event:

    - Telerik Premium Collection
    - PreEmptive One Year Commercial Runtime Intelligence license
    - Red Gate ANTS Memory Profiler
    - Quest Toad Extension for Visual Studio
    - DevExpress CodeRush and Refactor! Pro
    - GrapeCity ActiveReports/BI Suite
    - GrapeCity Spread 5.0
    - JetBrains ReSharper
    - ComponentOne Studio for ASP.NET
    - ComponentOne Studio for Silverlight

    Please check out the event sponsors, and visit http://www.dotnetrocks.com/roadtrip for more information!

    Thursday, April 29, 2010
    6:00 pm - Food and social
    6:30 pm - .NET Rocks interview
    7:15 pm - Richard Campbell
    8:00 pm - Carl Franklin
    8:45 pm - prizes!

    Read the article

  • Is there a way to track data structure dependencies from the database, through the tiers, all the way out to a web page?

    - by Sean Mickey
    When we design applications, we generally end up with the same tiered sets of data structures:

    - A persistent data structure that is described using DDL and implemented as RDBMS tables and columns.
    - A set of domain objects that consist primarily of data structures, usually combined with business-rule-level logic, that are implemented in a programming language such as Java.
    - A set of service layer interfaces that directly support use case implementations (which use the domain data structures as parameters), implemented as EJBs or something equivalent in another programming language.
    - UI screens that allow users to Create, Retrieve, Update, and (maybe) Delete all manner of data structures and graphs of data structures, with numerous screens and with multiple UI widgets, all structured to support the same data structures.

    But if you want to change the data structures in any of these tiers, it always seems extremely difficult to assess the impact(s) the change will have across the application. UML can help, but tracing through diagram after diagram is not a real solution to this problem. The best I have ever seen was a homespun data-tracking spreadsheet document that listed all of the data structures and walked the relationships from tier to tier. Is there a tool or accepted approach that makes it easy to identify a data structure in any tier and easily obtain a list of all dependent:

    - database table and column data structures
    - domain object data structures
    - service layer interface methods and parameter data structures
    - screen & UI component data structures

    Read the article

  • DDD Melbourne - lessons learnt

    - by Michael Freidgeim
    I've attended DDD Melbourne and want to list the interesting points that I've learnt and want to follow up on. To read more:

    * Moles - a mocking/isolation framework for .NET. Documentation is here. (See also the mocking frameworks comparison created October 4, 2009.)
    * WebFormsMVP
    * PluralSight - http://www.pluralsight-training.net/offers/default.aspx?cc=trial
    * ELMAH: Error Logging Modules and Handlers
    * Rhino.Mocks
    * VS UI Test Recorder - see the posts in the Visual Studio 2010 Coded UI Test User Guide. Note that the Microsoft Test Manager (MTM) tool is a separate application that can be started from the Program Files/VS 2010 menu; it is not a menu inside Visual Studio.
    * CodeContract - seems great in Debug. It would be good if runtime configuration were possible in production: the ability to log instead of throwing an exception. The current recommendation for customizing Debug.Assert is not trivial: "The programmer is free to use the customization provided by Debug.Assert using assert listeners to obtain whatever runtime behavior they desire (e.g., ignoring the error, logging it, or throwing an exception)."

        // Clears the existing list of assert listeners (the default pop-up box)
        System.Diagnostics.Debug.Listeners.Clear();
        // Install your own listener
        System.Diagnostics.Debug.Listeners.Add(MyTraceListener);

    Note that you can't catch the specific ContractException, but you can catch the generic Exception (see "How come you cannot catch Code Contract exceptions?").

    Books recommended:
    * "Working Effectively with Legacy Code" by Michael Feathers (corresponding article)
    * Fowler, Martin - Refactoring: Improving the Design of Existing Code; slides: http://jaoo.dk/jaoo1999/schedule/MartinFowlerRefractoring.pdf
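
    To make the Debug.Assert customization above concrete, here is a minimal self-contained sketch of a listener that logs failures instead of popping a dialog (the listener class name is mine, not from the post; note that Debug.Assert only fires in DEBUG builds):

        using System;
        using System.Diagnostics;

        class LoggingTraceListener : TraceListener
        {
            public override void Fail(string message, string detailMessage)
            {
                // Log instead of showing the assert dialog
                Console.Error.WriteLine("Assert/contract failure: {0} {1}", message, detailMessage);
            }
            public override void Write(string message) { Console.Error.Write(message); }
            public override void WriteLine(string message) { Console.Error.WriteLine(message); }
        }

        class Program
        {
            static void Main()
            {
                Debug.Listeners.Clear();                          // remove the default pop-up listener
                Debug.Listeners.Add(new LoggingTraceListener());  // install the logging one
                Debug.Assert(false, "demo assertion");            // now logged, not shown as a dialog
            }
        }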

    Read the article

  • Refactoring in domain driven design

    - by Andrew Whitaker
    I've just started working on a project, and we're using domain-driven design (as defined by Eric Evans in Domain-Driven Design: Tackling Complexity in the Heart of Software). I believe that our project is certainly a candidate for this design approach as Evans describes it in his book. I'm struggling with the idea of constantly refactoring. I know refactoring is a necessity in any project and will happen inevitably as the software changes. However, in my experience, refactoring occurs when the needs of the development team change, not as understanding of the domain changes ("refactoring to greater insight," as Evans calls it). I'm most concerned with breakthroughs in understanding of the domain model. I understand making small changes, but what if a large change in the model is necessary? What's an effective way of convincing yourself (and others) that you should refactor after you obtain a clearer domain model? After all, refactoring to improve code organization or performance could be completely separate from how expressive the code is in terms of the ubiquitous language. Sometimes it just seems like there's not enough time to refactor. Luckily, Scrum lends itself to refactoring: its iterative nature makes it easy to build a small piece and change it. But over time that piece will get larger, and what if you have a breakthrough after that piece is so large that it would be too difficult to change? Has anyone worked on a project employing domain-driven design? If so, it would be great to get some insight on this one. I'd especially like to hear some success stories, since DDD seems very difficult to get right. Thanks!

    Read the article

  • Add a Graphical User Interface (GUI) to the Microsoft Robocopy Command Line Tool

    - by Lori Kaufman
    Robocopy, or "Robust File Copy," is a command-line directory replication tool from Microsoft. It is available as a standard feature of Windows 7 and Vista, and was available as part of the Windows Server 2003 Resource Kit. NOTE: For Windows XP, you can obtain Robocopy by downloading the Resource Kit. Robocopy allows you to set up simple or advanced backup strategies. It provides features such as multi-threaded copying, mirroring or synchronization mode, automatic retry, and the ability to resume the copying process. If you are comfortable with command-line tools, you can run Robocopy directly on the command line using its command syntax and options. You can also download the command-line reference and usage notes for Robocopy as a PDF file. If you are more comfortable using a graphical user interface (GUI) rather than the command line, there are a couple of options for adding a GUI to the Robocopy command-line tool, making it easier to use. Both tools, RoboMirror and RichCopy, are discussed below, and links to download each tool are provided.
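
    As a point of reference before reaching for a GUI, a typical mirroring backup from the command line looks something like this (illustrative paths; run robocopy /? to confirm which switches your version supports - /MT, for example, requires the Windows 7-era version):

        robocopy C:\Data D:\Backup\Data /MIR /Z /R:3 /W:5 /MT:8 /LOG:C:\Logs\robocopy.log

    Here /MIR mirrors the tree (including deletions at the destination), /Z copies in restartable mode, /R and /W set the retry count and the wait between retries, /MT enables multi-threaded copying, and /LOG writes a log file.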

    Read the article

  • The fastest way to resize images from ASP.NET. And it’s (more) supported-ish.

    - by Bertrand Le Roy
    I've shown before how to resize images using GDI, which is fairly common but is explicitly unsupported because we know of very real problems that this can cause. Still, many sites use that method because those problems are fairly rare, and because most people assume it's the only way to get the job done. Plus, it works in medium trust. More recently, I've shown how you can use WPF APIs to do the same thing and get JPEG thumbnails 2.5 times faster than GDI (even now that GDI really ultimately uses WIC to read and write images). The boost in performance is great, but it comes at a cost that you may or may not care about: it won't work in medium trust. It's also just as unsupported as the GDI option. What I want to show today is how to use the Windows Imaging Components from ASP.NET APIs directly, without going through WPF. The approach has the great advantage that it's been tested and proven to scale very well. The WIC team tells me you should be able to call support and get answers if you hit problems. Caveats exist though. First, this is using interop, so until a signed wrapper sits in the GAC, it will require full trust. Second, the APIs have a very strong smell of native code and are definitely not .NET-friendly. And finally, the most serious problem is that older versions of Windows don't offer MTA support for image decoding. MTA support is only available on Windows 7, Vista, and Windows Server 2008. On 2003 and XP you'll only get STA support, which means the thread safety that we so badly need for server applications is not guaranteed on those operating systems. To make it work there, you'd have to spin up specialized threads yourself and manage the lifetime of your objects, which is outside the scope of this article. We'll assume that we're fine with all this and that we're running on 7 or 2008 under full trust.

    Be warned that the code that follows is not simple or very readable. This is definitely not the easiest way to resize an image in .NET. Wrapping native APIs such as WIC in a managed wrapper is never easy, but fortunately we won't have to: the WIC team already did it for us and released the results under MS-PL. The InteropServices folder, which contains the wrappers we need, is in the WicCop project, but I've also included it in the sample that you can download from the link at the end of the article.

    In order to produce a thumbnail, we first have to obtain a decoding frame object that WIC can use. Like with WPF, that object will contain the command to decode a frame from the source image but won't do the actual decoding until necessary. Getting the frame is done by reading the image bytes through a special WIC stream that you can obtain from a factory object that we're going to reuse for lots of other tasks:

        var photo = File.ReadAllBytes(photoPath);
        var factory = (IWICComponentFactory)new WICImagingFactory();
        var inputStream = factory.CreateStream();
        inputStream.InitializeFromMemory(photo, (uint)photo.Length);
        var decoder = factory.CreateDecoderFromStream(
            inputStream, null, WICDecodeOptions.WICDecodeMetadataCacheOnLoad);
        var frame = decoder.GetFrame(0);

    We can read the dimensions of the frame using the following (somewhat ugly) code:

        uint width, height;
        frame.GetSize(out width, out height);

    This enables us to compute the dimensions of the thumbnail, as I've shown in previous articles. We now need to prepare the output stream for the thumbnail. WIC requires a special kind of stream, IStream (not implemented by System.IO.Stream), and doesn't directly understand .NET streams. It does provide a number of implementations, but not exactly what we need here. We need to output to memory because we'll want to persist the same bytes to the response stream and to a local file for caching. The memory-bound version of IStream requires a fixed-length buffer, but we won't know the length of the buffer before we resize. To solve that problem, I've built a derived class from MemoryStream that also implements IStream. The implementation is not very complicated - it just delegates the IStream methods to the base class - but it involves some native pointer manipulation.

    Once we have a stream, we need to build the encoder for the output format, which could be anything that WIC supports. For web thumbnails, our only reasonable options are PNG and JPEG. I explored PNG because it's a lossless format and because WIC does support PNG compression. That compression is not very efficient though, and JPEG offers good quality with much smaller file sizes. On the web, it matters. I found the best PNG compression option (adaptive) to give files that are about twice as big as 100%-quality JPEG (an absurd setting), 4.5 times bigger than 95%-quality JPEG, and 7 times larger than 85%-quality JPEG, which is more than acceptable quality. As a consequence, we'll use JPEG. The JPEG encoder can be prepared as follows:

        var encoder = factory.CreateEncoder(Consts.GUID_ContainerFormatJpeg, null);
        encoder.Initialize(outputStream, WICBitmapEncoderCacheOption.WICBitmapEncoderNoCache);

    The next operation is to create the output frame:

        IWICBitmapFrameEncode outputFrame;
        var arg = new IPropertyBag2[1];
        encoder.CreateNewFrame(out outputFrame, arg);

    Notice that we are passing in a property bag. This is where we're going to specify our only parameter for encoding, the JPEG quality setting:

        var propBag = arg[0];
        var propertyBagOption = new PROPBAG2[1];
        propertyBagOption[0].pstrName = "ImageQuality";
        propBag.Write(1, propertyBagOption, new object[] { 0.85F });
        outputFrame.Initialize(propBag);

    We can then set the resolution for the thumbnail to 96, something we weren't able to do with WPF and had to hack around:

        outputFrame.SetResolution(96, 96);

    Next, we set the size of the output frame and create a scaler from the input frame and the computed dimensions of the target thumbnail:

        outputFrame.SetSize(thumbWidth, thumbHeight);
        var scaler = factory.CreateBitmapScaler();
        scaler.Initialize(frame, thumbWidth, thumbHeight,
            WICBitmapInterpolationMode.WICBitmapInterpolationModeFant);

    The scaler is using the Fant method, which I think is the best-looking one even if it seems a little softer than cubic (zoomed in the original post to better show the defects). [Comparison images: Cubic, Fant, Linear, Nearest neighbor]

    We can write the source image to the output frame through the scaler:

        outputFrame.WriteSource(scaler, new WICRect {
            X = 0, Y = 0, Width = (int)thumbWidth, Height = (int)thumbHeight });

    And finally we commit the pipeline that we built and get the byte array for the thumbnail out of our memory stream:

        outputFrame.Commit();
        encoder.Commit();
        var outputArray = outputStream.ToArray();
        outputStream.Close();

    That byte array can then be sent to the output stream and to the cache file. Once we've gone through this exercise, it's only natural to wonder whether it was worth the trouble. I ran this method, as well as GDI and WPF resizing, over thirty 12-megapixel images at JPEG qualities between 70% and 100%, and measured the file size and time to resize. Here are the results:

    [Charts: size of resized images; time to resize thirty 12-megapixel images]

    Not much to see on the size graph: sizes from WPF and WIC are equivalent, which is hardly surprising as WPF calls into WIC. There is just an anomaly at 75% for WPF that I noted in my previous article and that disappears when using WIC directly. But overall, using WPF or WIC over GDI represents a slight win in file size. The time to resize is more interesting. WPF and WIC get similar times, although WIC seems to always be a little faster - not surprising considering WPF is using WIC, and the margin of error on these results is probably fairly close to the time difference. As we already knew, the time to resize does not depend on the quality level; only the size does. This means that the only decision you have to make here is size versus visual quality.

    This third approach to server-side image resizing on ASP.NET seems to converge on the fastest possible one. We have marginally better performance than WPF, plus some additional peace of mind that this approach is sanctioned for server-side usage by the Windows Imaging team. It still doesn't work in medium trust. That is a problem, and it shows the way for future server-friendly managed wrappers around WIC.

    The sample code for this article can be downloaded from: http://weblogs.asp.net/blogs/bleroy/Samples/WicResize.zip
    The benchmark code can be found here (you'll need to add your own images to the Images directory and then add those to the project, with content and copy-if-newer in the properties of the files in the Solution Explorer): http://weblogs.asp.net/blogs/bleroy/Samples/WicWpfGdiImageResizeBenchmark.zip
    WIC tools can be downloaded from: http://code.msdn.microsoft.com/wictools
    To conclude, the original post closes with some of the resized thumbnails at 85% Fant.

    Read the article

  • Nautilus crashes after Ubuntu Tweak Package Cleaner [fixed]

    - by Ka7anax
    A few days ago I started having problems with Nautilus. Basically, when I try to open a folder it crashes. It doesn't happen every time, but it does in about 85% of cases... Sometimes, after the crash, all my desktop icons are gone as well. The only thing I can think of that causes this is Ubuntu Tweak. I'm not sure, but the issues started after I ran the Package Cleaner from Ubuntu Tweak. Any ideas?

    EDIT 2 (IMPORTANT): It seems I fixed this problem by doing the following: 1) I uninstalled this Nautilus script: http://mundogeek.net/nautilus-scripts/#nautilus-send-gmail 2) I installed nautilus-elementary. So far everything is back to normal... If anything bad happens again, I will come back!

    EDIT 1: The first time, after running the command (nautilus --quit; nautilus --no-desktop) three times, the whole system crashed (except the mouse; I could still move the mouse). After a restart I ran it and obtained this:

        Initializing nautilus-gdu extension
        Initializing nautilus-dropbox 0.6.7
        (nautilus:2966): GConf-CRITICAL **: gconf_value_free: assertion `value != NULL' failed
        (nautilus:2966): GConf-CRITICAL **: gconf_value_free: assertion `value != NULL' failed
        Nautilus-Share-Message: Called "net usershare info" but it failed: 'net usershare' returned error 255: net usershare: cannot open usershare directory /var/lib/samba/usershares. Error No such file or directory
        Please ask your system administrator to enable user sharing.

    and then this:

        cristi@cris-laptop:~$ nautilus --quit; nautilus --no-desktop
        (nautilus:3810): Unique-DBus-WARNING **: Error while sending message: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.

    Read the article

  • OPN Specialized Webcasts - Everything You Need to Know - April 2010

    - by Claudia Costa
    Oracle presents a new series of webcasts hosted by renowned experts that will help you make the most of the exciting new OPN Specialized program. The webcasts will cover everything you need to know about "What is the OPN Specialized program?" and "How to become Specialized in the Technology and Application products", and each will include a live question and answer session. Attend the webcasts and discover how to differentiate your business from the competition with the OPN Specialized program. Please select the topic and date of interest and obtain the conference details:

    - What is the OPN Specialized program? March 16th and April 6th, 2010. Click here for call details.
    - How to become Specialized in the Technology products portfolio? March 23rd and April 13th. Click here for call details.
    - How to become Specialized in the Applications products portfolio? March 30th and April 20th. Click here for call details.

    PARTICIPATE, and take advantage of the special promotional pricing available for a limited time only.

    For more information, please contact: Melissa Lopes, 214235194

    Read the article

  • Whole Lotta Virtualization Goin' On

    - by rickramsey
    Lately we've published a lot of content about virtualization. Here's a sampling.

    Podcast: Technology Preview of Transcendent Memory. It turns out that in a virtual environment, RAM is the bottleneck. Not because it's slow (it isn't), but because each CPU still has to use its own RAM, which gets expensive. In this podcast, Dan Magenheimer describes how Oracle and the open source community taught the guest kernel in Oracle Linux to share its memory with other CPUs. Transcendent memory will wind up saving large data centers a lot of money. Find out how.

    Tech Article: How to Use Oracle VM Templates. This article describes how to prepare an Oracle VM environment to use Oracle VM Templates, how to obtain a template, and how to deploy the template to your Oracle VM environment. It also describes how to create a virtual machine based on that template, and how to clone the template and change the clone's configuration.

    Tech Article: How to Set Up a Load-Balanced Application Across Two Oracle Solaris Zones. Install Apache Tomcat on two Oracle Solaris zones, connect them across a VPN, and let the Integrated Load Balancer in Oracle Solaris 11 manage traffic. Presto: high(er) availability in a single server.

    Tech Article: How to Install Oracle RAC on Oracle Solaris Zone Clusters. Learn how to implement a multi-tiered database environment that isolates database tiers and administrative domains, while taking advantage of centralized (and simpler) cluster administration.

    For fans of Jerry Lee Lewis: if you're a fan of Jerry Lee Lewis, you might enjoy this video.

    - Rick
    Website | Newsletter | Facebook | Twitter

    Read the article

  • Adaptive Case Management – Exposing the API – part 1 by Roger Goossens

    - by JuergenKress
    One of the most important building blocks of Adaptive Case Management is the ACM API. At one point or another you're going to need a way to get information out of the case: think of a list of stakeholders, the available activities, the milestones reached, and so on. Since there is no web service available yet that exposes the internals of the case, your only option right now is the ACM API.

    ACM evangelist Niall Commiskey has put some samples online to give you a good feel for the power of the ACM API. The examples show how you can access the API by means of RMI. You first need to obtain a BPMServiceClientFactory, which gives access to the most important services you'll be needing, i.e. the IBPMUserAuthenticationService (needed for obtaining a valid user context) and the ICaseService (the service that exposes all important case information). Obtaining an instance of the BPMServiceClientFactory involves some boilerplate coding in which you'll need the RMI URL and user credentials. Read the complete article here.

    SOA & BPM Partner Community: For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration, please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center. Blog | Twitter | LinkedIn | Facebook | Wiki | Technorati Tags: ACM, API, Adaptive Case Management, Community, Oracle SOA, Oracle BPM, OPN, Jürgen Kress

    Read the article

  • Promoting Organizational Visibility for SOA and SOA Governance Initiatives – Part I by Manuel Rosa and André Sampaio

    - by JuergenKress
    The costs of technology assets can become significant, and the need to centralize, monitor, and control the contribution of each technology asset becomes a paramount responsibility for many organizations. Through the implementation of various mechanisms, it is possible to obtain a holistic vision and develop synergies between different assets, empowering their re-use and analyzing the impact that IT changes have on the organization. When the SOA domain is considered, the issue of governance should therefore always come into play. Although SOA governance is mandatory to achieve any measure of SOA success, its value still goes unnoticed in most organizations, mostly due to the lack of visibility and the detached view of SOA initiatives. A number of problems jeopardize the visibility of these initiatives:

    - Understanding and measuring the value of SOA governance and its contribution: SOA governance tools are too technical and isolated from other systems. They are inadequate for anyone outside of the domain (business analysts, project managers, or even some enterprise architects), and are especially unsuitable at the CxO level.
    - Lack of information exchange with the business, other operational areas, and project management: it is not only a matter of a lack of dialog, but also a question of using a common vocabulary (textual or graphic) that is adequate for all stakeholders. We need to generate information that is useful to a wider scope of stakeholders, such as business and enterprise architects.

    In this article we describe how an organization can leverage existing best practices and, with the help of adequate exploration and communication tools, achieve and maintain the level of quality and visibility that is required for SOA and SOA governance initiatives.

    Introduction: Understanding and implementing effective SOA governance has become a corporate imperative in order to ensure coherence and the attainment of the basic objectives of SOA initiatives: develop the correct services, control the costs and risks bound to the development process, and reduce time-to-market. Read the full article here.

    SOA & BPM Partner Community: For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration, please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center. Blog | Twitter | LinkedIn | Facebook | Wiki | Mix | Forum | Technorati Tags: SOA Governance, Link Consulting, SOA Community, Oracle SOA, Oracle BPM, Community, OPN, Jürgen Kress

    Read the article

  • Unity 3D (with Nvidia driver) becomes very slow and laggy

    - by Graham
    How can I prevent my Unity 3D desktop from becoming slow after a while, given that I have Nvidia Quadro NVS 290 graphics in TwinView mode?

    The desktop starts out fast on login but becomes slow, laggy, and hesitant (high latency) after a while. The symptom is spikes in CPU usage by /usr/bin/X whenever I cause any graphical activity with the mouse or keyboard (e.g. typing, changing tabs, dragging windows). The desktop remains slow even with all windows closed (except htop in a Terminal) and all extraneous processes killed.

    Detail: changing tabs in Terminal takes about a second, and X spikes to 76% CPU. As I type into Firefox, X spikes to 95% CPU. Dragging a Terminal window, X goes to 70% CPU. Basically, every graphical action sends the CPU usage of X through the roof.

    Device: Nvidia Quadro NVS 290
    Driver package: binary driver nvidia-current-updates (280.13-0ubuntu5)
    Dual monitors: pair of DELL UltraSharp 1908FP in TwinView (X screen 2560x1024)
    OS: fresh install of Ubuntu 11.10 amd64 Desktop with all updates
    Hardware: Dell Precision T5400 Workstation

    [Links in the original post: pastebins of Xorg.0.log, xorg.conf, and the output of nvidia-xconfig -t (easier to read than xorg.conf), plus the output of /usr/lib/nux/unity_support_test -p.]

    To obtain the htop screenshot in the original post, I typed "asdf" several times in this text box, alt-tabbed to Terminal, and took a screenshot of the high X CPU usage. This also happens when Firefox is not running.

    The Quadro NVS 290 has no thermal sensor according to sensors-detect:

        Next adapter: NVIDIA i2c adapter 0 at 2:00.0 (i2c-0)
        Do you want to scan it? (YES/no/selectively):
        Client found at address 0x50
        Probing for `Analog Devices ADM1033'... No
        Probing for `Analog Devices ADM1034'... No
        Probing for `SPD EEPROM'... No
        Probing for `EDID EEPROM'... Yes
            (confidence 8, not a hardware monitoring chip)

    I tried the nouveau driver by disabling nvidia-current-updates under Additional Drivers, but Ubuntu and xrandr -q fail to detect the second monitor. This may be issue 737349. The funniest thing is that the Nouveau wiki says XRandR 1.2 dual-monitor support is present, so it should work with a second monitor.

    Read the article
