Search Results

Search found 45063 results on 1803 pages for 'name mangling'.


  • Blueprints for Oracle NoSQL Database

    - by dan.mcclary
    I think that some of the most interesting analytic problems are graph problems. I'm always interested in new ways to store and access graphs. As such, I really like the work being done by Tinkerpop to create Open Source Software that makes property graphs more accessible over a wide variety of datastores. Since key-value stores like Oracle NoSQL Database are well suited to storing property graphs, I decided to extend the Blueprints API to work with it. Below I'll discuss some of the implementation details, but you can check out the finished product here: http://github.com/dwmclary/blueprints-oracle-nosqldb.

    What's in a Property Graph? In the most general sense, a graph is just a collection of vertices and edges. Vertices and edges can have properties: weights, names, or any number of other traits. In an undirected graph, edges connect vertices without direction. A directed graph specifies that all edges have a head and a tail --- a direction. A multi-graph allows multiple edges to connect two vertices. A "property graph" encompasses all of these traits.

    Key-Value Stores for Property Graphs. Key-value stores like Oracle NoSQL Database tend to be ideal for implementing property graphs. First, if any vertex or edge can have any number of traits, we can treat it as a hash map. For example:

        Vertex["name"] = "Mary"
        Vertex["age"] = 28
        Vertex["ID"] = 12345

    and so on. This is a natural key-value relationship: the key "name" maps to the value "Mary." Moreover, if we maintain two hash maps, one for vertex objects and one for edge objects, we've essentially captured the graph. As such, any scalable key-value store is fertile ground for planting graphs.

    Oracle NoSQL Database as a Scalable Graph Database. While Oracle NoSQL Database offers useful features like tunable consistency, what lends it to storing property graphs is the storage guarantee around its key structure. Keys in Oracle NoSQL Database are divided into two parts: a major key and a minor key. The storage guarantee is simple: major keys will be distributed across storage nodes, which could encompass a large number of servers. However, all minor keys which are children of a given major key are guaranteed to be stored on the same storage node. For example, the vertices /Personnel/Vertex/1 and /Personnel/Vertex/2 may be stored on different servers, but /Personnel/Vertex/1-/name and /Personnel/Vertex/1-/age will always be on the same server. This means that we can structure our graph database such that retrieving all the properties for a vertex or edge requires I/O from only a single storage node. Moreover, Oracle NoSQL Database provides a storeIterator, which allows us to iterate over a huge number of vertices and edges in a scalable fashion. By storing the vertices and edges as major keys, we guarantee that they are distributed evenly across all storage nodes. At the same time, we can use a partial major key to iterate over all the vertices or edges (e.g. we search over /Personnel/Vertex to iterate over all vertices).

    Fork It! The Blueprints API and Oracle NoSQL Database present a great way to get started using a scalable key-value database to store and access graph data. However, a graph store isn't useful without a good graph to work on. I encourage you to fork or pull the repository, store some data, and try using Gremlin or any other language to explore.
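    To make the two-hash-map idea concrete, here is a minimal, self-contained Java sketch: plain HashMaps stand in for the key-value store, and all class and method names are illustrative, not part of the Blueprints API or the Oracle NoSQL Database client.

        import java.util.HashMap;
        import java.util.Map;

        public class PropertyGraphSketch {
            // vertexId -> (propertyName -> value)
            private final Map<String, Map<String, Object>> vertices = new HashMap<>();
            // edgeId -> (propertyName -> value)
            private final Map<String, Map<String, Object>> edges = new HashMap<>();

            public void setVertexProperty(String vertexId, String name, Object value) {
                vertices.computeIfAbsent(vertexId, k -> new HashMap<>()).put(name, value);
            }

            public void addEdge(String edgeId, String tailVertexId, String headVertexId, String label) {
                Map<String, Object> e = edges.computeIfAbsent(edgeId, k -> new HashMap<>());
                e.put("out", tailVertexId); // tail of the directed edge
                e.put("in", headVertexId);  // head of the directed edge
                e.put("label", label);
            }

            public static void main(String[] args) {
                PropertyGraphSketch g = new PropertyGraphSketch();
                // Key paths mirror the /Personnel/Vertex/1 example above.
                g.setVertexProperty("/Personnel/Vertex/1", "name", "Mary");
                g.setVertexProperty("/Personnel/Vertex/1", "age", 28);
                g.addEdge("/Personnel/Edge/1", "/Personnel/Vertex/1", "/Personnel/Vertex/2", "manages");
                System.out.println(g.vertices);
                System.out.println(g.edges);
            }
        }

    In a real store, each vertex or edge id would become a major key and each property name a minor key, which is exactly what gives the single-storage-node retrieval property described above.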

    Read the article

  • JDK 7u25: Solutions to Issues caused by changes to Runtime.exec

    - by Devika Gollapudi
    The following examples were prepared by Java engineering for the benefit of Java developers who may have faced issues with Runtime.exec on the Windows platform.

    Background: In JDK 7u21, the decoding of command strings specified to the Runtime.exec(String), Runtime.exec(String,String[]) and Runtime.exec(String,String[],File) methods was made more strict. See the JDK 7u21 Release Notes for more information. This caused several issues for applications. The following section describes some of the problems faced by developers, and their solutions.

    Note: In JDK 7u25, the system property jdk.lang.Process.allowAmbigousCommands can be used to relax the checking process, and helps as a workaround for some applications that cannot be changed. The workaround is only effective for applications that are run without a SecurityManager. See the JDK 7u25 Release Notes for more information. To understand the details of the Windows API CreateProcess call, see: http://msdn.microsoft.com/en-us/library/windows/desktop/ms682425%28v=vs.85%29.aspx

    There are two forms of Runtime.exec calls: with the command as a string, "Runtime.exec(String command[, ...])", and with the command as a string array, "Runtime.exec(String[] cmdarray [, ...])". The issues described in this section relate to the first form. With the first call form, developers expect the command to be passed "as is" to Windows, where the command would need to be split into its executable name and arguments parts first. But, in accordance with the Java API, the command argument is split into executable name and arguments by spaces.

    Problem 1: "The file path for the command includes spaces". In the call

        Runtime.getRuntime().exec("c:\\Program Files\\do.exe")

    the argument is split by spaces into an array of strings: c:\\Program, Files\\do.exe. The first element of the parsed array is interpreted as the executable name, verified by the SecurityManager (if present) and surrounded by quotations to avoid ambiguity in the executable path. This results in the wrong command

        "c:\\Program" "Files\\do.exe"

    which will fail. Solution: use the ProcessBuilder class, or the Runtime.exec(String[] cmdarray [, ...]) call, or quote the executable path. Where it is not possible to change the application code and where a SecurityManager is not used, the Java property jdk.lang.Process.allowAmbigousCommands can be set to "true" from the command line:

        -Djdk.lang.Process.allowAmbigousCommands=true

    This will relax the checking process to allow ambiguous input.
    Examples:

        new ProcessBuilder("c:\\Program Files\\do.exe").start()
        Runtime.getRuntime().exec(new String[]{"c:\\Program Files\\do.exe"})
        Runtime.getRuntime().exec("\"c:\\Program Files\\do.exe\"")

    Problem 2: "Shell command/.bat/.cmd IO redirection". The following implicit cmd.exe calls:

        Runtime.getRuntime().exec("dir > temp.txt")
        new ProcessBuilder("foo.bat", ">", "temp.txt").start()
        Runtime.getRuntime().exec(new String[]{"foo.cmd", ">", "temp.txt"})

    lead to the wrong command: "XXXX" ">" temp.txt. Solution: to specify the command correctly, use the following options:

        Runtime.getRuntime().exec("cmd /C \"dir > temp.txt\"")
        new ProcessBuilder("cmd", "/C", "foo.bat > temp.txt").start()
        Runtime.getRuntime().exec(new String[]{"cmd", "/C", "foo.cmd > temp.txt"})

    or

        Process p = new ProcessBuilder("cmd", "/C", "XXX").redirectOutput(new File("temp.txt")).start();

    Problem 3: "Group execution of shell commands and/or .bat/.cmd files". Due to the enforced verification procedure, arguments in the following calls create the wrong commands:

        Runtime.getRuntime().exec("first.bat && second.bat")
        new ProcessBuilder("dir", "&&", "second.bat").start()
        Runtime.getRuntime().exec(new String[]{"dir", "|", "more"})

    Solution: to specify the command correctly, use the following options:

        Runtime.getRuntime().exec("cmd /C \"first.bat && second.bat\"")
        new ProcessBuilder("cmd", "/C", "dir && second.bat").start()
        Runtime.getRuntime().exec(new String[]{"cmd", "/C", "dir | more"})

    The same scenario also works for the "&", "||" and "^" operators of the cmd.exe shell.

    Problem 4: ".bat/.cmd with special DOS chars in quoted params". Due to the enforced verification, arguments containing special DOS characters in quoted parameters (such as the redirection below) cause exceptions to be thrown in calls of this shape:

        Runtime.getRuntime().exec("log.bat \"error > 1\"")
        new ProcessBuilder("log.bat", "error > 1").start()
        Runtime.getRuntime().exec(new String[]{"log.bat", "error > 1"})

    Solution: to specify the command correctly, use the following options:

        Runtime.getRuntime().exec("cmd /C log.bat \"error > 1\"")
        new ProcessBuilder("cmd", "/C", "log.bat", "error > 1").start()
        Runtime.getRuntime().exec(new String[]{"cmd", "/C", "log.bat", "error > 1"})

    Examples: complicated redirection for a shell construction:

        cmd /c dir /b C:\ > "my lovely spaces.txt"

    becomes

        Runtime.getRuntime().exec(new String[]{"cmd", "/C", "dir /b C:\\ > \"my lovely spaces.txt\""});

    The Golden Rule: in most cases, cmd.exe has two arguments: "/C" and the command for interpretation.
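    Pulling these rules together, here is a small self-contained sketch of the safe call forms (Windows only; the paths and file names are illustrative):

        import java.io.File;
        import java.io.IOException;

        public class ExecExamples {
            public static void main(String[] args) throws IOException, InterruptedException {
                // An executable path containing spaces is passed as a single
                // argument, so Runtime.exec(String) cannot tokenize it on spaces.
                Process p1 = new ProcessBuilder("c:\\Program Files\\do.exe").start();

                // Shell operators and built-ins go through cmd.exe with exactly
                // two arguments: "/C" and the command line for the interpreter.
                Process p2 = new ProcessBuilder("cmd", "/C", "dir && second.bat").start();

                // Redirection handled by the Java API instead of the shell.
                Process p3 = new ProcessBuilder("cmd", "/C", "dir")
                        .redirectOutput(new File("temp.txt"))
                        .start();
                p3.waitFor();
            }
        }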

    Read the article

  • Using the Script Component as a Conditional Split

    This is a quick walkthrough of how you can use the Script Component to perform Conditional Split-like behaviour, splitting your data across multiple outputs. We will use C# code to decide what data flows to which output, rather than the expression syntax of the Conditional Split transformation.

    Start by setting up the source. For my example the source is a list of SQL objects from sys.objects, just a quick way to get some data:

        SELECT type, name FROM sys.objects

        type  name
        S     syssoftobjrefs
        F     FK_Message_Page
        U     Conference
        IT    queue_messages_23007163

    Shown above is a small sample of the data you could expect to see. Once you have set up your source, add the Script Component, selecting Transformation when prompted for the type, and connect it up to the source. Now open the component, but don't dive into the script just yet. First we need to select some columns. Select the Input Columns page and then select the columns we want to use as part of our filter logic. You don't need to choose columns that you may want later; these are just the columns used in the script itself.

    Next we need to add our outputs. Select the Inputs and Outputs page. You get one output by default, but we need to add some more; it wouldn't be much of a split otherwise. For this example we'll add just one more. Click the Add Output button, and you'll see a new output is added. Now we need to set some properties, so make sure our new Output 1 is selected. In the properties grid, change the SynchronousInputID property to be our input, Input 0, and change the ExclusionGroup property to 1. Now select Output 0 and change its ExclusionGroup property to 2. The value itself isn't important, provided each output has a different value other than zero. Setting this property on both outputs allows us to split the data down one or the other, making each exclusive. If we left it at 0, that output would get all the rows; that can be a useful feature, allowing you to copy selected rows to one output whilst retaining the full set of data in the other.

    Now we can go back to the Script page and start writing some code. For the example we will do a very simple test: if the value of the type column is U, for user table, then the row goes down the first output; otherwise it ends up in the other. This mimics the exclusive behaviour of the Conditional Split transformation.

        public override void Input0_ProcessInputRow(Input0Buffer Row)
        {
            // Filter all user tables to the first output,
            // the remaining objects down the other
            if (Row.type.Trim() == "U")
            {
                Row.DirectRowToOutput0();
            }
            else
            {
                Row.DirectRowToOutput1();
            }
        }

    The code itself is very simple: a basic if clause determines which of the DirectRowToOutput methods we call; there is one for each output. Of course you could write a lot more code to implement some very complex logic, but the final direction is still just a method call. If we now close the script component, we can hook up the outputs and test the package. Your numbers will vary depending on the sample database, but as you can see we have clearly split our input data into two outputs. As a final tip, when adding the outputs I would normally rename them, changing the Name in the Properties grid. This means the generated method names follow the pattern, as do the path labels shown on the design surface, making everything that much easier to recognise.

    Read the article

  • Attachments in Oracle BPM 11g – Create a BPM Process Instance by passing an Attachment

    - by Venugopal Mangipudi
    Problem Statement: On a recent engagement I had a requirement where we needed to create BPM instances using a message start event. The challenge was that the instance needed to be created after polling a file location and attaching the picked-up file (PDF) as an attachment to the instance.

    Proposed Solution: I was contemplating using the process API to accomplish this, but came up with a solution that involves a BPEL process to pick up the file and send a notification to the BPM process, passing the attachment as a payload. The following are some of the brief steps that were used to build the solution:

    BPM Process to receive an attachment as part of the payload: The BPM process is a very simple process which has a Message Start event that accepts the attachment as an argument, and a simple User Task that the user can use to view the attachment (as part of the out-of-the-box attachment panel). The input payload is based on AttachmentPayload.xsd. The three key elements of the payload are:

        <xsd:element name="filename" type="xsd:string"/>
        <xsd:element name="mimetype" type="xsd:string"/>
        <xsd:element name="content" type="xsd:base64Binary"/>

    A screenshot of the Human Task data assignment that needs to be performed to attach the file is provided here. Once the process and the UI project (default generated UI) are deployed to the SOA server, copy the WSDL location of the process service (from EM). This WSDL would be used in the BPEL project to create the instances in the BPM process after a file is polled.

    BPEL Process to poll for the file and create instances in the BPM process: For the BPEL process, a File adapter was configured as a Read service (File Streaming option, keeping the schema as Opaque). Once a location and the file pattern to poll are provided, the Read service partner link was wired to invoke the BPEL process. Also, using the BPM process WSDL, we can create the web service reference and invoke the start operation. Before we do the assignment for the Invoke operation, a global variable should be created to hold the fileName of the file. The mapping to the global variable can be done in the Receive activity properties (jca.file.FileName). So, for the Assign operation before we invoke the BPM process service, we can get the content of the file from the receive input variable and the fileName from the jca.file.FileName property. The mimetype needs to be hard-coded to the MIME type of the file: application/pdf (I am still researching ways to derive the MIME type, as it is not available as part of the jca.file properties). The screenshot of the BPEL process can be found here and the Assign activity can be found here.

    The project source can be found at the following location. A sample PDF file to test the project and a screenshot of the BPM Human Task screen after the successful creation of the instance can be found here.

    References: [1] https://blogs.oracle.com/fmwinaction/entry/oracle_bpm_adding_an_attachment
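    As a side note, the base64Binary content element can be produced from a polled file with a few lines of standard Java; a minimal sketch (the file name is illustrative, and in the solution above the encoding is actually handled by the BPEL assignment rather than custom code):

        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;
        import java.util.Base64;

        public class AttachmentPayloadValues {
            public static void main(String[] args) throws Exception {
                Path pdf = Paths.get("sample.pdf"); // illustrative file name
                byte[] bytes = Files.readAllBytes(pdf);

                // Values for the three payload elements from AttachmentPayload.xsd.
                String filename = pdf.getFileName().toString();
                String mimetype = "application/pdf"; // hard-coded, as in the post
                String content = Base64.getEncoder().encodeToString(bytes);

                System.out.printf("%s (%s): %d base64 chars%n",
                        filename, mimetype, content.length());
            }
        }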

    Read the article

  • Lessons Building KeyRef (a .NET developer learning Rails)

    - by Liam McLennan
    Just because I like to build things, and I like to learn, I have been working on a keyboard shortcut reference site. I am using this as an opportunity to improve my Ruby and Rails skills. The first few days were frustrating. Perhaps the learning curve of all the fun new toys was a bit excessive. Finally, tonight, things have really started to come together. I still don't understand the Rails built-in testing support, but I will get there.

    Interesting Things I Learned Tonight

    RubyMine IDE. Tonight I switched to RubyMine instead of my usual Notepad++. I suspect RubyMine is a powerful tool if you know how to use it – but I don't. At the moment it gives me errors about some gems not being activated. This is another one of those things that I will get to. I have also noticed that the editor functions significantly differently to the editors I am used to. For example, in Visual Studio and Notepad++, if you place the cursor at the start of a line and press the left arrow, the cursor is sent to the end of the previous line. In RubyMine nothing happens.

    Haml. Haml is my favourite view engine. For my .NET work I have been using its non-union Mexican CLR equivalent – nHaml.

    Multiple CSS Classes. To define a div with more than one CSS class, Haml lets you chain them together with a '.', such as:

        .span-6.search_result
          contents of the div go here

    Indent Consistency. I also learnt tonight that both Haml and nHaml complain if you are not consistent about indenting. As a consequence of the move from Notepad++ to RubyMine, my Haml views ended up with some tab indenting and some space indenting. For the view to render, all of the indents within a view must be consistent.

    Sorting Arrays. I guessed that Ruby would be able to sort an array alphabetically by a property of the elements, so my first attempt was:

        Application.all.sort {|app| app.name}

    which does not work. You have to supply a comparer (much like .NET). The correct sort is:

        Application.all.sort {|a,b| a.name.downcase <=> b.name.downcase}

    MongoMapper Find by Id. Since document databases are just fancy key-value stores, it is essential to be able to easily search for a document by its id. This functionality is so intrinsic that it seems the MongoMapper author did not bother to document it. To search by id, simply pass the id to the find method:

        Application.find('4c19e8facfbfb01794000002')

    Rails And CoffeeScript. I am a big fan of CoffeeScript, so integrating it into this application is high on my priorities. My first thought was to copy Dr Nic's strategy. Unfortunately, I did not get past step 1: install Node.js. I am doing my development on Windows, and Node is Unix-only. I looked around for a solution but eventually had to concede defeat… for now.

    Quicksearch. The front page of the application I am building displays a list of applications. When the user types in the search box, I want to reduce the list of applications to match their search. A quick googlebing turned up quicksearch, a jQuery plugin. You simply tell quicksearch where to get its input (the search textbox) and the list of items to filter (the divs containing the names of applications) and it just works. Here is the code:

        $('#app_search').quicksearch('.search_result');

    Summary. I have had a productive evening. The app now displays a list of applications, allows them to be sorted, and links through to an application page when an application is selected. Next on the list is to display the set of keyboard shortcuts for an application.

    Read the article

  • Functional/nonfunctional requirements VS design ideas

    - by Nicholas Chow
    Problem domain: Functional requirements define what a system does. Non-functional requirements define quality attributes of what the system does as a whole (performance, security, reliability, volume, usability, etc.). Constraints limit the design space; they restrict designers to certain types of solutions.

    Solution domain: Design ideas define how the system does it. For example, a stakeholder need might be "we want to increase our sales, therefore we must improve the usability of our webshop so more customers will purchase"; a requirement can be written for this (problem domain). Design takes this further into the solution domain by saying "therefore we want to offer credit card payments in addition to the current prepayment option".

    My problem is that the transition from requirement to design seems really vague, so when writing requirements I am often unsure whether or not I have incorporated design ideas in my requirements, which would make the requirement wrong. Another problem is that I often write functional requirements as what a system does, and then also specify in what timeframe it must be done. But is this correct? Is it then still a functional requirement, or a non-functional one? Is it better to separate it into two distinct requirements?

    Here are a few requirements I wrote:

    FR1 Registration of Organizer. FR1 describes the registration of an Organizer on CrowdFundum.
    FR1.1 The system shall display a registration form on the website.
    FR1.2 The system shall require a Name, Username, Document number of passport/ID card, Address, Zip code, City, Email address, Telephone number, Bank account and Captcha code on the registration form when a user registers.
    FR1.4 The system shall display an error message containing "Registration could not be completed" to the subscriber within 1 second after the system check of the registration form was unsuccessful.
    FR1.5 The system shall send a verification email containing a verification link to the subscriber within 30 seconds after the system check of the registration form was successful.
    FR1.6 The system shall add the newly registered Organizer to the user base within 5 seconds after the verification link was accessed.

    FR2 Organizer submits a Project. FR2 describes the submission of a Project by an Organizer on CrowdFundum.
    FR2 The system shall display a submit Project form to the Organizer accounts on the website.
    FR2.3 The system shall check for completeness the Name of the Project, 1-3 photos, keywords of the Project, punch line, minimum and maximum number of people, funding threshold, one or more reward tiers, a schedule of when what will be organized, a budget plan, 300-800 words of additional information about the Project, and contact details, within 1 second after an Organizer submits the submit Project form.
    FR2.8 The system shall add the Project link to the new Projects category on the homepage within 30 seconds after the system made a Project webpage.
    FR2.9 The system shall include in the Project link for the homepage: Name of the Project, 1 photo, punch line, within 30 seconds after the system made a Project webpage.

    Questions:
    FR1.1: Have I incorporated a design idea here? Would "the system shall have a registration form" be a better functional requirement?
    FR1.2, FR2.3: Are these singular? Would the conditions be better written as separate requirements, one each?
    FR1.4: Is this a design idea? Is this a correct functional requirement, or have I incorporated something non-functional (performance) in it? Would it be better if I wrote it like this: FR1: The system shall display an error message when a check is unsuccessful. NFR: The system will respond to unsuccessful registration form checks within 1 second. The same question applies to FR2.8 and FR2.9.
    FR2.3: The system shall check for "completeness". Is "completeness" used ambiguously here? Should I rephrase it?
    FR1.2: I added that the system shall require a "Captcha code". Is this a functional requirement, or does it belong to the security aspect of a non-functional requirement?

    I am eagerly waiting for your response. Thanks!

    Read the article

  • Tuning Default WorkManager - Advantages and Disadvantages

    - by Murali Veligeti
    Before discussing tuning the Default WorkManager, let's have a brief introduction to what the Default WorkManager is.

    Before the WebLogic Server 9.0 release, we had the concept of execute queues. In WebLogic Server (before WLS 9.0), processing was performed in multiple execute queues. Different classes of work were executed in different queues, based on priority and ordering requirements, and to avoid deadlocks. In addition to the default execute queue, weblogic.kernel.default, there were pre-configured queues dedicated to internal administrative traffic, such as weblogic.admin.HTTP and weblogic.admin.RMI. Users could control thread usage by altering the number of threads in the default queue, or configure custom execute queues to ensure that particular applications had access to a fixed number of execute threads, regardless of overall system load.

    From the WLS 9.0 release onwards, WebLogic Server uses a single thread pool (called the Default WorkManager), in which all types of work are executed. WebLogic Server prioritizes work based on rules you define and on run-time metrics, including the actual time it takes to execute a request and the rate at which requests are entering and leaving the pool. The common thread pool changes its size automatically to maximize throughput. The queue monitors throughput over time and, based on history, determines whether to adjust the thread count. For example, if historical throughput statistics indicate that a higher thread count increased throughput, WebLogic increases the thread count. Similarly, if statistics indicate that fewer threads did not reduce throughput, WebLogic decreases the thread count. This new strategy makes it easier for administrators to allocate processing resources and manage performance, avoiding the effort and complexity involved in configuring, monitoring, and tuning custom execute queues.

    The Default WorkManager is used to handle thread management and perform self-tuning. This Work Manager is used by an application when no other Work Managers are specified in the application's deployment descriptors. In many situations, the Default Work Manager may be sufficient for most application requirements. WebLogic Server's thread-handling algorithms assign each application its own fair share by default. Applications are given equal priority for threads and are prevented from monopolizing them.

    The default work manager, as its name suggests, is the work manager used by default; thus, all applications deployed on WLS will use it. But sometimes, when your application is already in production, it's obvious you can't take your EAR/WAR, update the deployment descriptor(s) and redeploy it. The default work manager belongs to a thread pool, and the initial thread pool comes with only five threads; that's not much. If your application has to face a large number of hits, you may want to start with more than that. Well, that's quite easy. You have two options to do so.

    1) Modify the config.xml. Just add the following line(s) to your server definition:

        <server>
          <name>AdminServer</name>
          <self-tuning-thread-pool-size-min>100</self-tuning-thread-pool-size-min>
          <self-tuning-thread-pool-size-max>200</self-tuning-thread-pool-size-max>
          [...]
        </server>

    2) Add some JVM parameters. Add the following system properties in setDomainEnv.sh/setDomainEnv.cmd or startWebLogic.sh/startWebLogic.cmd:

        -Dweblogic.threadpool.MinPoolSize=100 -Dweblogic.threadpool.MaxPoolSize=100

    Reboot WLS and see that the option has been taken into account.

    Disadvantage: So far, so good, but there is a disadvantage in tuning the Default WorkManager. Internally, WebLogic Server has many work managers configured for different types of work. If we run out of threads in the self-tuning pool (because of the system property -Dweblogic.threadpool.MaxPoolSize) due to being undersized, then important work that WLS might need to do could be starved. So, while limiting the self-tuning pool limits the Default WorkManager, it also limits all the other internal WorkManagers which WLS uses. The best alternative is therefore to override the Default WorkManager: create a WorkManager for the application and assign that WorkManager to the application, instead of tuning the Default WorkManager.
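    For completeness, an application usually consumes its own Work Manager through the CommonJ API; a minimal sketch, assuming a work manager resource reference named wm/MyAppWorkManager has been declared in the deployment descriptors (the JNDI name is an assumption, not something defined above):

        import javax.naming.InitialContext;
        import commonj.work.Work;
        import commonj.work.WorkManager;

        public class WorkManagerLookup {
            public void submit() throws Exception {
                InitialContext ic = new InitialContext();
                // Look up the application's own Work Manager, so tuning it
                // does not starve the WLS internal work managers.
                WorkManager wm = (WorkManager) ic.lookup("java:comp/env/wm/MyAppWorkManager");
                wm.schedule(new Work() {
                    public void run() { /* application work goes here */ }
                    public boolean isDaemon() { return false; }
                    public void release() { /* cancellation hook */ }
                });
            }
        }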

    Read the article

  • AWS .NET SDK v2: setting up queues and topics

    - by Elton Stoneman
    Originally posted on: http://geekswithblogs.net/EltonStoneman/archive/2013/10/13/aws-.net-sdk-v2-setting-up-queues-and-topics.aspx

    Following on from my last post, reading from SQS queues with the new SDK is easy stuff, but linking a Simple Notification Service topic to an SQS queue is a bit more involved. The AWS model for topics and subscriptions is a bit more advanced than in Azure Service Bus. SNS lets you have subscribers on multiple different channels, so you can send a message which gets relayed to email addresses, mobile apps and SQS queues all in one go. As the topic owner, when you request a subscription on any channel, the subscriber needs to confirm they're happy for you to send them messages. With email subscriptions, the user gets a confirmation request from Amazon which they need to reply to before they start getting messages. With SQS, you need to grant the topic permission to write to the queue. If you own both the topic and the queue, you can do it all in code with the .NET SDK.

    Let's say you want to create a new topic, a new queue as a topic subscriber, and link the two together. Creating the topic is easy with the SNS client (which has an expanded name, AmazonSimpleNotificationServiceClient, compared to the SQS class, which is just called QueueClient):

        var request = new CreateTopicRequest();
        request.Name = TopicName;
        var response = _snsClient.CreateTopic(request);
        TopicArn = response.TopicArn;

    In the response from AWS (which I'm assuming is successful), you get an ARN – Amazon Resource Name – which is the unique identifier for the topic. We create the queue using the same code from my last post, AWS .NET SDK v2: the message-pump pattern, and then we need to subscribe the queue to the topic. The topic creates the subscription request:

        var response = _snsClient.Subscribe(new SubscribeRequest
        {
            TopicArn = TopicArn,
            Protocol = "sqs",
            Endpoint = _queueClient.QueueArn
        });

    That response will give you an ARN for the subscription, which you'll need if you want to set attributes like RawMessageDelivery. Then the SQS client needs to confirm the subscription by allowing the topic to send messages to it. The SDK doesn't give you a nice mechanism for doing that, so I've extended my AWS wrapper with a method that encapsulates it:

        internal void AllowSnsToSendMessages(TopicClient topicClient)
        {
            var policy = Policies.AllowSendFormat.Replace("%QueueArn%", QueueArn).Replace("%TopicArn%", topicClient.TopicArn);
            var request = new SetQueueAttributesRequest();
            request.Attributes.Add("Policy", policy);
            request.QueueUrl = QueueUrl;
            var response = _sqsClient.SetQueueAttributes(request);
        }

    That builds up a policy statement, which gets added to the queue as an attribute, and specifies that the topic is allowed to send messages to the queue. The statement itself is a JSON block which contains the ARN of the queue, the ARN of the topic, and an Allow effect for the sqs:SendMessage action:

        public const string AllowSendFormat = @"{
          ""Statement"": [
            {
              ""Sid"": ""MySQSPolicy001"",
              ""Effect"": ""Allow"",
              ""Principal"": { ""AWS"": ""*"" },
              ""Action"": ""sqs:SendMessage"",
              ""Resource"": ""%QueueArn%"",
              ""Condition"": {
                ""ArnEquals"": { ""aws:SourceArn"": ""%TopicArn%"" }
              }
            }
          ]
        }";

    There's a new gist with an updated QueueClient and a new TopicClient here: Wrappers for the SQS and SNS clients in the AWS SDK for .NET v2. Both clients have an Ensure() method which creates the resource, so if you want to create a topic and a subscription you can use:

        var topicClient = new TopicClient("BigNews", "ImListening");

    And the topic client has a Subscribe() method, which calls into the message pump on the queue client:

        topicClient.Subscribe(x => Log.Debug(x.Body));
        var message = {}; //etc.
        topicClient.Publish(message);

    So you can isolate all the fiddly bits and use SQS and SNS with a similar interface to the Azure SDK.
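    As a cross-check on how much ceremony the policy step adds, the equivalent wiring in the AWS SDK for Java (v1) collapses into a helper that builds the same policy; a sketch assuming default credentials and illustrative names:

        import com.amazonaws.services.sns.AmazonSNSClient;
        import com.amazonaws.services.sns.util.Topics;
        import com.amazonaws.services.sqs.AmazonSQSClient;

        public class SnsToSqs {
            public static void main(String[] args) {
                AmazonSNSClient sns = new AmazonSNSClient();
                AmazonSQSClient sqs = new AmazonSQSClient();

                String topicArn = sns.createTopic("BigNews").getTopicArn();
                String queueUrl = sqs.createQueue("ImListening").getQueueUrl();

                // Subscribes the queue to the topic and attaches the
                // sqs:SendMessage policy allowing the topic to deliver to it.
                Topics.subscribeQueue(sns, sqs, topicArn, queueUrl);
            }
        }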

    Read the article

  • CacheAdapter 2.4 – Bug fixes and minor functional update

    - by Glav
    Note: If you are unfamiliar with the CacheAdapter library and what it does, you can read all about its awesome ability to utilise memory, Asp.Net Web, Windows Azure AppFabric and memcached caching implementations via a single unified, simple-to-use API here and here.

    The CacheAdapter library is receiving an update to version 2.4 and is currently available on Nuget here.

    Update: The CacheAdapter has actually just had a minor revision to 2.4.1. This significantly increases the performance and reliability of the memcached scenario under more extreme loads; general to moderate usage won't see any noticeable difference though.

    Bugs: This latest version fixes a bug that is only present in the memcached implementation and is only seen at rare, intermittent times (making it particularly hard to find). The bug caused a cache node to be removed from the farm when errors in deserialization of cached objects occurred, due to serialised data not being read from the stream in its entirety. The code also contains enhancements to better surface serialization exceptions, to aid in the debugging process. This is also specifically targeted at the memcached implementation, which is important when moving from something like the memory or Asp.Net Web caching mechanisms to memcached, where the serialization rules are not as lenient. There are a few other minor bug fixes, code cleanup and a little refactoring.

    Minor feature addition: In addition to this bug fix, many people have asked for a single setting to either enable or disable the cache. In this version, you can disable the cache by setting the IsCacheEnabled flag to false in the application configuration file, something like the example below:

        <Glav.CacheAdapter.MainConfig>
          <setting name="CacheToUse" serializeAs="String">
            <value>memcached</value>
          </setting>
          <setting name="DistributedCacheServers" serializeAs="String">
            <value>localhost:11211</value>
          </setting>
          <setting name="IsCacheEnabled" serializeAs="String">
            <value>False</value>
          </setting>
        </Glav.CacheAdapter.MainConfig>

    Your reasons to use this feature may vary (perhaps some performance testing or problem diagnosis). At any rate, disabling the cache will cause every attempt to retrieve data from the cache to result in a cache miss and return null. If you are using the ICacheProvider with the delegate/Func<T> syntax to populate the cache, the delegate method will get executed every single time. For example, when the cache is disabled, the following delegate/Func<T> code will be executed on every attempt:

        var data1 = cacheProvider.Get<SomeData>("cache-key", DateTime.Now.AddHours(1), () =>
        {
            // With the cache disabled, this data access code is executed every attempt to
            // get this data via the CacheProvider.
            var someData = new SomeData() { SomeText = "cache example1", SomeNumber = 1 };
            return someData;
        });

    One final note: If you access the cache directly via the ICache instance, instead of the higher-level ICacheProvider API, you bypass this setting and still access the underlying cache implementation. Only the ICacheProvider instance observes the IsCacheEnabled setting.

    Thanks to those individuals who have used this library and provided feedback. If you have any suggestions or ideas, please submit them to the issue register on Bitbucket (which is where you can grab all the source code from, too).

    Read the article

  • Custom Templates: Using user exits

    - by Anthony Shorten
    One of the features of Oracle Utilities Application Framework V4.1 is the ability to use templates and user exits to extend the base configuration files. The configuration files used by the product are based upon a set of templates shipped with the product. When the configureEnv utility asks for configuration settings, they are stored in a configuration file, ENVIRON.INI, which outlines the environment settings. These settings are then used by the initialSetup utility to populate the various configuration files used by the product, using templates located in the templates directory of the installation.

    Now, whilst the majority of installations at any site are non-production and the templates provided are generally adequate for that need, there are circumstances where extension of the templates is needed to take advantage of more advanced facilities (such as advanced security and environment settings). The issue then becomes that if you alter the configuration files manually (directly or indirectly), you may lose all your custom settings the next time you run initialSetup. To counter this, we allow customers either to override templates with their own, or to use the user exits we now provide in the templates to add fragments of configuration unique to that part of the configuration file. The latter means that the base template is still used, but additions are included to provide the extensions. The provision of custom templates is supported, but as soon as you use a custom template you are then responsible for reflecting any changes we put in the base template over time. Not a big task, but annoying if you have to do it for multiple copies of the product. I prefer to use user exits, as they seem to represent the least-effort solution.

    The way to find the available user exits is to either read the Server Administration Guide that comes with your product, or look at individual templates for lines like:

        #ouaf_user_exit <user exit name>

    where <user exit name> is the name of the user exit. User exits are not always present, but are in the places that we feel are the most likely to be changed. If a user exit does not exist, you can always use a custom template instead.

    Now let's look at an example. By default, the product generates a config.xml file to be used with Oracle WebLogic. This configuration file has the basic settings contained in it to manage the product. If you want to take advantage of the Oracle WebLogic advanced settings, you can use the console to make those changes and they will be reflected in the config.xml automatically. To retain those changes across invocations of initialSetup, you need to alter the template that generates the config.xml, or use user exits.

    The technique is this: make the change in the console, and when you save the change, WebLogic will reflect it in the config.xml for you. Compare the old and new versions of the config.xml, determine what to add, and then find the user exit to put it in by examining the base template. For example, by default, the console is not automatically deployed (it is deployed on demand) in the base config.xml. To make the console deploy, you can add the following line to the templates/CM_config.xml.win.exit_3.include file (for Windows) or the templates/CM_config.xml.exit_3.include file (for Linux/UNIX):

        <internal-apps-deploy-on-demand-enabled>false</internal-apps-deploy-on-demand-enabled>

    Now run initialSetup to reflect the change, and if you check the splapp/config/config.xml file you will see the change applied for you.

    Now, how did I know which include file? I checked the template for config.xml and found there was a user exit at the right place. I prefixed my include file name with "CM_" to denote it as a custom user exit. This tells the upgrade tools to leave that file alone whenever you decide to upgrade (or even apply fixes). User exits can be powerful and allow customizations to be added for advanced configuration. You will see products built on Oracle Utilities Application Framework use these exits themselves (usually prefixed with the product code). You are simply taking advantage of the same mechanism.

    Read the article

  • How-to remove the close icon from task flows opened in dialogs (11.1.1.4)

    - by frank.nimphius
    ADF bounded task flows can be opened in an external dialog and return values to the calling application, as documented in chapter 19 of the Oracle Fusion Middleware Fusion Developer's Guide for Oracle Application Development Framework 11g: http://download.oracle.com/docs/cd/E17904_01/web.1111/b31974/taskflows_dialogs.htm#BABBAFJB

    Setting the task flow call activity property Run as Dialog to true and the Display Type property to inline-popup opens the bounded task flow in an inline popup. To launch the dialog, a command item is used that references the control flow case to the task flow call activity:

        <af:commandButton text="Lookup" id="cb6"
            windowEmbedStyle="inlineDocument" useWindow="true"
            windowHeight="300" windowWidth="300"
            action="lookup" partialSubmit="true"/>

    By default, the dialog opens with a close icon in its header that does not raise a task flow return event when used for dismissing the dialog. In previous releases, the close icon could only be hidden using CSS in a custom skin definition, as explained in a previous OTN Harvest publishing (12/2010): http://www.oracle.com/technetwork/developer-tools/adf/learnmore/dec2010-otn-harvest-199274.pdf

    As a new feature, Oracle JDeveloper 11g (11.1.1.4) provides an option to globally remove the close icon from inline dialogs without using CSS. For this, the following managed bean definition needs to be added to the adfc-config.xml file:

        <managed-bean>
          <managed-bean-name>
            oracle$adfinternal$view$rich$dailogInlineDocument
          </managed-bean-name>
          <managed-bean-class>java.util.TreeMap</managed-bean-class>
          <managed-bean-scope>application</managed-bean-scope>
          <map-entries>
            <key-class>java.lang.String</key-class>
            <value-class>java.lang.String</value-class>
            <map-entry>
              <key>MODE</key>
              <value>withoutCancel</value>
            </map-entry>
          </map-entries>
        </managed-bean>

    Note the setting of the managed bean scope to application, which applies this setting to all sessions of an application.

    Read the article

  • Sign up Today for User Feedback Sessions at Oracle OpenWorld and JavaOne 2012

    - by Lionel Dubreuil
    You're Invited to Sign Up for Oracle Usability Feedback Sessions

    SIGN UP TODAY to get the most from your conference experience by participating in a usability feedback session, where your expertise will help Oracle develop outstanding products and solutions.

    The Oracle User Experience team is conducting a usability evaluation on publishing and accessing Oracle Enterprise Repository content when building SOA projects in JDeveloper. We are asking developers and architects who build or integrate applications using SOA Suite to take a look at the interaction between JDeveloper and the Enterprise Repository. We are looking for feedback on this interaction so that we may improve the user interface in a future release.

    The feedback sessions will be conducted during the Oracle OpenWorld and JavaOne conferences, at the InterContinental Hotel in San Francisco, CA. Sessions will last 1 hour and will be held on Monday, October 1 through Wednesday, October 3, 2012. This event fills up quickly, and space is limited. If you are interested in participating, please send an email to [email protected] with the following information:

        Name:
        Company Name:
        Job Title:
        Email:
        Phone Number (work, mobile, include country code):
        Which conference are you attending? ___ Oracle OpenWorld ___ JavaOne
        Have you ever participated in usability activities with Oracle or any of its subsidiaries? ___ Yes (specify) ___ No
        Are you currently using JDeveloper? ___ Yes (specify version(s)) ___ No
        How long have you used JDeveloper? ___ Less than 1 year ___ 1-2 years ___ 3-4 years ___ 4+ years
        Are you currently using SOA features in JDeveloper? ___ Yes ___ No
        How long have you used SOA features in JDeveloper? ___ Less than 1 year ___ 1-2 years ___ 3-4 years ___ 4+ years
        How often do you use SOA features in JDeveloper? ___ Daily ___ 2-3 times a week ___ Once a week ___ Once a month or less
        Briefly describe the types of SOA tasks you use JDeveloper to perform:

    If you know your availability, please let me know which day you would prefer to participate: Monday, Tuesday or Wednesday. Limited sessions are available on each day, and each session lasts 1 hour.

    Thank you for taking the time to complete this questionnaire. It will help us match you to the best-suited feedback session. Once we receive your email, we will contact you to set up a time and day for participation. You'll find more information about our on-site lab on the VoX (Voice of User Experience) blog, and on our Events page at Usable Apps.

    Read the article

  • What information must never appear in logs?

    - by MainMa
    I'm about to write the company guidelines about what must never appear in logs (the trace of an application). In fact, some developers try to include as much information as possible in the trace, making it risky to store those logs and extremely dangerous to submit them, especially when the customer doesn't know this information is stored, because she never cared about it and never read the documentation and/or warning messages.

    For example, when dealing with files, some developers are tempted to trace the names of the files. If we trace everything on error, then before appending a file name to a directory it will be easy to notice, for example, that the appended name is too long, and that the bug in the code was forgetting to check the length of the concatenated string. That is helpful, but this is sensitive data, and must never appear in logs. In the same way, the following must never appear in a trace:

        Passwords,
        IP addresses and network information (MAC address, host name, etc.)¹,
        Database accesses,
        Direct input from the user and stored business data.

    So what other types of information must be banished from the logs? Are there any guidelines already written which I can use?

    ¹ Obviously, I'm not talking about things such as IIS or Apache logs. What I'm talking about is the sort of information which is collected with the sole intent of debugging the application itself, not of tracing the activity of untrusted entities.

    Edit: Thank you for your answers and your comments. Since my question is not very precise, I'll try to answer the questions asked in the comments:

    What am I doing with the logs? The logs of the application may be stored in memory, in plain text on the local hard disk, in a database (again in plain text), or in Windows Events. In every case, the concern is that those sources may not be safe enough. For example, when a customer runs an application and this application stores logs in a plain text file in the temp directory, anybody who has physical access to the PC can read those logs. The logs of the application may also be sent over the internet: for example, if a customer has an issue with an application, we can ask her to run this application in full-trace mode and to send us the log file. Also, some applications may send the crash report to us automatically (and even if there are warnings about sensitive data, in most cases customers don't read them).

    Am I talking about specific fields? No. I'm working on general business applications only, so the only sensitive data is business data. There is nothing related to health or other fields covered by specific regulations. But thank you for mentioning that; I probably should take a look at those fields for some clues about what I can include in the guidelines.

    Isn't it easier to encrypt the data? No. It would make every application much more complicated, especially if we want to use C# diagnostics and TraceSource. It would also require managing authorizations, which is not the easiest thing to do. Finally, if we are talking about logs submitted to us by a customer, we must be able to read the logs, but without having access to sensitive data. So technically, it's easier to never include sensitive information in the logs at all, and to never care about how and where those logs are stored.
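    One pattern that follows from these guidelines is to scrub known-sensitive values at the logging boundary instead of trusting every call site; a minimal illustrative Java sketch (the patterns and masking rules are assumptions, not a standard API):

        import java.util.regex.Pattern;

        public final class LogSanitizer {
            // Illustrative patterns for values that must never reach the logs.
            private static final Pattern PASSWORD = Pattern.compile("(?i)(password\\s*=\\s*)\\S+");
            private static final Pattern IPV4 = Pattern.compile("\\b\\d{1,3}(\\.\\d{1,3}){3}\\b");
            private static final Pattern WIN_PATH = Pattern.compile("[A-Za-z]:\\\\[^\\s\"]+");

            private LogSanitizer() { }

            public static String scrub(String message) {
                String s = PASSWORD.matcher(message).replaceAll("$1***");
                s = IPV4.matcher(s).replaceAll("<ip>");
                s = WIN_PATH.matcher(s).replaceAll("<path>"); // file paths are sensitive data
                return s;
            }

            public static void main(String[] args) {
                System.out.println(scrub(
                    "login failed for password=hunter2 from 10.1.2.3 reading C:\\Users\\a\\report.txt"));
                // prints: login failed for password=*** from <ip> reading <path>
            }
        }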

    Read the article

  • Very slow KVM in Ubuntu 12.04

    - by Guy Fawkes
    I use Ubuntu 12.04 64-bit and KVM; my CPU is a Core i5 3.3 GHz and I have 8 GB of DDR3 RAM. I run Windows 7 in KVM and it's extremely slow. My co-worker uses Debian on the same PC configuration and can run Windows 7 extremely fast! Where could my problem be?

        sudo cat /etc/libvirt/qemu/windows.xml

        <!--
        WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE
        OVERWRITTEN AND LOST. Changes to this xml configuration should be made using:
          virsh edit windows
        or other application using the libvirt API.
        -->
        <domain type='kvm'>
          <name>windows</name>
          <uuid>5c685175-baea-0ca6-591f-8269d923ffb8</uuid>
          <memory>2097152</memory>
          <currentMemory>2097152</currentMemory>
          <vcpu>1</vcpu>
          <os>
            <type arch='x86_64' machine='pc-1.0'>hvm</type>
            <boot dev='hd'/>
          </os>
          <features>
            <acpi/>
            <apic/>
            <pae/>
          </features>
          <clock offset='localtime'/>
          <on_poweroff>destroy</on_poweroff>
          <on_reboot>restart</on_reboot>
          <on_crash>restart</on_crash>
          <devices>
            <emulator>/usr/bin/kvm</emulator>
            <disk type='file' device='disk'>
              <driver name='qemu' type='raw'/>
              <source file='/var/lib/libvirt/images/windows.img'/>
              <target dev='hda' bus='ide'/>
              <address type='drive' controller='0' bus='0' unit='0'/>
            </disk>
            <controller type='ide' index='0'>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
            </controller>
            <interface type='network'>
              <mac address='52:54:00:94:63:91'/>
              <source network='default'/>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
            </interface>
            <serial type='pty'>
              <target port='0'/>
            </serial>
            <console type='pty'>
              <target type='serial' port='0'/>
            </console>
            <input type='tablet' bus='usb'/>
            <input type='mouse' bus='ps2'/>
            <graphics type='vnc' port='-1' autoport='yes'/>
            <sound model='ich6'>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
            </sound>
            <video>
              <model type='vga' vram='262144' heads='1'/>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
            </video>
            <memballoon model='virtio'>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
            </memballoon>
          </devices>
        </domain>

    Read the article

  • Setting up a Carousel Component in ADF Mobile

    - by Shay Shmeltzer
    The Carousel component is one of the slicker ways of showing collections of data, and on a mobile device it works really well with the finger-swipe gesture. Using the Carousel component in ADF Mobile is similar to using it in regular web ADF applications, with one major change: right now you can't drag a collection from the Data Controls palette and drop it as a carousel. So here is a quick workaround for that, and details about setting up carousels in your application.

    The first thing you'll need is a data control that returns an array of records. In my demo I'm using the Emps collection that you can get from following this tutorial. Then you drag the emps collection and drop it on your AMX page as an ADF Mobile iterator. We are doing this as a shortcut to getting the right binding needed for a carousel on our page. If you look now in your page's bindings, you'll see something like this. You can now comment out the whole iterator code in your page's source.

    Next, let's add the carousel. From the Component palette, drag the carousel (from the Data Views category) to the page. Next, drag a carousel item and drop it in the nodeStamp facet of the carousel. Now we'll hook up the carousel to the binding we got from the iterator. This is quite simple: just copy the var and value attributes from the iterator tag to the carousel tag:

        var="row" value="#{bindings.emps.collectionModel}"

    Next, drop a panelFormLayout, or another layout panel, into the carousel item. Into that panel you can now drop items and bind their value property to the row attribute names, basically copying the way it is done in the fields of the iterator, for example: value="#{row.hireDate}". By the way, you can also copy other attributes, like the label. And that's it. Your code should end up looking something like this:

        <amx:carousel id="c1" var="row" value="#{bindings.emps.collectionModel}">
          <amx:facet name="nodeStamp">
            <amx:carouselItem id="ci1">
              <amx:panelFormLayout id="pfl1">
                <amx:inputText label="#{bindings.emps.hints.salary.label}" value="#{row.salary}" id="it1"/>
                <amx:inputText label="#{bindings.emps.hints.name.label}" value="#{row.name}" id="it2"/>
              </amx:panelFormLayout>
            </amx:carouselItem>
          </amx:facet>
        </amx:carousel>

    And when you run your application, it will look like this:

    Read the article

  • Schizophrenic Ubuntu 12.10-12.04: Atheros 922 PCI WIFI is disabled in Unity but enabled in terminal - How to get it to work?

    - by zewone
    I am trying to get my PCI Wireless Atheros 922 card to work. It is disabled in Unity: both in the network utility and on the desktop (see screenshot: http://www.amisdurailhalanzy.be/Screenshot%20from%202012-10-25%2013:19:54.png). I tried many different pieces of advice on many different forums: installed 12.10 instead of 12.04, enabled all interfaces, etc. I have read about the ath9k driver. The terminal shows no hw or sw lock for the Atheros card; nevertheless, it is still disabled. Nothing has worked so far; the card is still disabled. Any help is much appreciated. Here are more tech details:

        myuser@adri1:~$ sudo lshw -C network
        *-network:0 DISABLED
             description: Wireless interface
             product: AR922X Wireless Network Adapter
             vendor: Atheros Communications Inc.
             physical id: 2
             bus info: pci@0000:03:02.0
             logical name: wlan1
             version: 01
             serial: 00:18:e7:cd:68:b1
             width: 32 bits
             clock: 66MHz
             capabilities: pm bus_master cap_list ethernet physical wireless
             configuration: broadcast=yes driver=ath9k driverversion=3.5.0-17-generic firmware=N/A latency=168 link=no multicast=yes wireless=IEEE 802.11bgn
             resources: irq:18 memory:d8000000-d800ffff
        *-network:1
             description: Ethernet interface
             product: VT6105/VT6106S [Rhine-III]
             vendor: VIA Technologies, Inc.
             physical id: 6
             bus info: pci@0000:03:06.0
             logical name: eth0
             version: 8b
             serial: 00:11:09:a3:76:4a
             size: 10Mbit/s
             capacity: 100Mbit/s
             width: 32 bits
             clock: 33MHz
             capabilities: pm bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 100bt-fd autonegotiation
             configuration: autonegotiation=on broadcast=yes driver=via-rhine driverversion=1.5.0 duplex=half latency=32 link=no maxlatency=8 mingnt=3 multicast=yes port=MII speed=10Mbit/s
             resources: irq:18 ioport:d300(size=256) memory:d8013000-d80130ff
        *-network DISABLED
             description: Wireless interface
             physical id: 1
             bus info: usb@1:8.1
             logical name: wlan0
             serial: 00:11:09:51:75:36
             capabilities: ethernet physical wireless
             configuration: broadcast=yes driver=rt2500usb driverversion=3.5.0-17-generic firmware=N/A link=no multicast=yes wireless=IEEE 802.11bg

        myuser@adri1:~$ sudo rfkill list all
        0: hci0: Bluetooth
            Soft blocked: no
            Hard blocked: no
        1: phy1: Wireless LAN
            Soft blocked: no
            Hard blocked: yes
        2: phy0: Wireless LAN
            Soft blocked: no
            Hard blocked: no

        myuser@adri1:~$ dmesg | grep wlan0
        [   15.114235] IPv6: ADDRCONF(NETDEV_UP): wlan0: link is not ready

        myuser@adri1:~$ dmesg | egrep 'ath|firm'
        [   14.617562] ath: EEPROM regdomain: 0x30
        [   14.617568] ath: EEPROM indicates we should expect a direct regpair map
        [   14.617572] ath: Country alpha2 being used: AM
        [   14.617575] ath: Regpair used: 0x30
        [   14.637778] ieee80211 phy0: >Selected rate control algorithm 'ath9k_rate_control'
        [   14.639410] Registered led device: ath9k-phy0

        myuser@adri1:~$ dmesg | grep wlan1
        [   15.119922] IPv6: ADDRCONF(NETDEV_UP): wlan1: link is not ready

        myuser@adri1:~$ lspci -nn | grep 'Atheros'
        03:02.0 Network controller [0280]: Atheros Communications Inc. AR922X Wireless Network Adapter [168c:0029] (rev 01)

        myuser@adri1:~$ sudo ifconfig
        eth0    Link encap:Ethernet HWaddr 00:11:09:a3:76:4a
                inet addr:192.168.2.2 Bcast:192.168.2.255 Mask:255.255.255.0
                inet6 addr: fe80::211:9ff:fea3:764a/64 Scope:Link
                UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
                RX packets:5457 errors:0 dropped:0 overruns:0 frame:0
                TX packets:2548 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:1000
                RX bytes:3425684 (3.4 MB) TX bytes:282192 (282.1 KB)
        lo      Link encap:Local Loopback
                inet addr:127.0.0.1 Mask:255.0.0.0
                inet6 addr: ::1/128 Scope:Host
                UP LOOPBACK RUNNING MTU:16436 Metric:1
                RX packets:590 errors:0 dropped:0 overruns:0 frame:0
                TX packets:590 errors:0 dropped:0 overruns:0 carrier:0
                collisions:0 txqueuelen:0
                RX bytes:53729 (53.7 KB) TX bytes:53729 (53.7 KB)

        myuser@adri1:~$ sudo iwconfig
        wlan0   IEEE 802.11bg ESSID:off/any
                Mode:Managed Access Point: Not-Associated Tx-Power=off
                Retry long limit:7 RTS thr:off Fragment thr:off
                Encryption key:off
                Power Management:on
        lo      no wireless extensions.
        eth0    no wireless extensions.
        wlan1   IEEE 802.11bgn ESSID:off/any
                Mode:Managed Access Point: Not-Associated Tx-Power=0 dBm
                Retry long limit:7 RTS thr:off Fragment thr:off
                Encryption key:off
                Power Management:off

        myuser@adri1:~$ lsmod | grep "ath9k"
        ath9k        116549 0
        mac80211     461161 3 rt2x00usb,rt2x00lib,ath9k
        ath9k_common  13783 1 ath9k
        ath9k_hw     376155 2 ath9k,ath9k_common
        ath           19187 3 ath9k,ath9k_common,ath9k_hw
        cfg80211     175375 4 rt2x00lib,ath9k,mac80211,ath

        myuser@adri1:~$ iwlist scan
        wlan0   Failed to read scan data : Network is down
        lo      Interface doesn't support scanning.
        eth0    Interface doesn't support scanning.
        wlan1   Failed to read scan data : Network is down

        myuser@adri1:~$ lsb_release -d
        Description: Ubuntu 12.10

        myuser@adri1:~$ uname -mr
        3.5.0-17-generic i686

    Any help much appreciated... Thanks, Philippe

    Read the article

  • saving and retrieving a text file in java?

    - by user3319432
    import java.sql.*;
    import java.awt.*;
    import javax.swing.*;
    import java.awt.event.*;

    // Note: the class, constructor, and main() in the original post used three
    // different names (saving / Search Record / Search); they are unified here
    // so the code compiles.
    public class SearchRecord extends JFrame implements ActionListener {

        JTextField edpno = new JTextField(10);
        JLabel l0 = new JLabel("EDP Number: ");
        JComboBox fname = new JComboBox();
        JLabel l1 = new JLabel("First Name: ");
        JTextField lname = new JTextField(20);
        JLabel l2 = new JLabel("Last Name: ");
        JComboBox contno = new JComboBox();
        JLabel l3 = new JLabel("Contact Number: ");
        JButton bOK = new JButton("Save");
        JButton bRetrieve = new JButton("Retrieve");
        private ImageIcon icon;

        JPanel C = new JPanel() {
            protected void paintComponent(Graphics g) {
                g.drawImage(icon.getImage(), 0, 0, null);
                super.paintComponent(g);
            }
        };

        public SearchRecord() {
            icon = new ImageIcon("images/canres.png");
            C.setOpaque(false);
            C.setLayout(new GridLayout(5, 2, 4, 4));
            setTitle("Search Record");
            C.add(l0);
            C.add(edpno);
            edpno.addActionListener(this);
            C.add(l1);
            C.add(fname);
            fname.setForeground(Color.BLUE);
            fname.setFont(new Font(" ", Font.BOLD, 15));
            C.add(l2);
            C.add(lname);
            C.add(l3);
            C.add(contno);
            contno.setForeground(Color.BLUE);
            contno.setFont(new Font(" ", Font.BOLD, 15));
            C.add(bOK);
            bOK.addActionListener(this);
            C.add(bRetrieve);
            bRetrieve.addActionListener(this);
            bOK.setBackground(Color.white);
            bRetrieve.setBackground(Color.white);
        }

        // Looks up a record by EDP number and fills the form fields.
        public void retrieveRecord() {
            try {
                Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
                String path = "jdbc:odbc:;DRIVER=Microsoft Access Driver (*.mdb);DBQ=Database/roomassign.mdb";
                Connection con = DriverManager.getConnection(path, "", "");
                Statement s = con.createStatement();
                s.executeQuery("select firstname, lastname, contactnumber from name WHERE edpno ='" + edpno.getText() + "'");
                ResultSet rs = s.getResultSet();
                while (rs.next()) {
                    fname.setSelectedItem(rs.getString(1));
                    lname.setText(rs.getString(2));
                    contno.setSelectedItem(rs.getString(3));
                }
                s.close();
                con.close();
            } catch (Exception q) {
                JOptionPane.showMessageDialog(this, q);
            }
        }

        // Writes the form fields back to the database.
        public void saveRecord() {
            try {
                Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
                String path = "jdbc:odbc:;DRIVER=Microsoft Access Driver (*.mdb);DBQ=Database/roomassign.mdb";
                Connection con = DriverManager.getConnection(path, "", "");
                Statement s = con.createStatement();
                String sql = "UPDATE rooms SET Firstname='" + fname.getSelectedItem()
                        + "',Lastname='" + lname.getText()
                        + "',Contactnumber='" + contno.getSelectedItem()
                        + "' WHERE edpno='" + edpno.getText() + "'";
                s.executeUpdate(sql);
                JOptionPane.showMessageDialog(this, "New room record has been successfully saved");
                dispose();
                s.close();
                con.close();
            } catch (Exception mismatch) {
                JOptionPane.showMessageDialog(this, mismatch);
            }
        }

        public void actionPerformed(ActionEvent ako) {
            if (ako.getSource() == bRetrieve) {
                dispose();
            } else if (ako.getSource() == bOK) {
                saveRecord();
            }
        }

        public static void main(String[] args) {
            new SearchRecord();
        }
    }
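One point worth noting for anyone reusing this pattern: concatenating edpno.getText() into the SQL string invites both syntax errors and SQL injection. A minimal sketch of the retrieval method using a PreparedStatement instead (table and column names are taken from the code above and may need adjusting to the real schema):

    // Sketch: same lookup, but with the user input bound as a parameter.
    public void retrieveRecord() {
        String sql = "SELECT firstname, lastname, contactnumber FROM name WHERE edpno = ?";
        try (Connection con = DriverManager.getConnection(
                 "jdbc:odbc:;DRIVER=Microsoft Access Driver (*.mdb);DBQ=Database/roomassign.mdb", "", "");
             PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setString(1, edpno.getText());   // bound safely, no string concatenation
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    fname.setSelectedItem(rs.getString(1));
                    lname.setText(rs.getString(2));
                    contno.setSelectedItem(rs.getString(3));
                }
            }
        } catch (Exception e) {
            JOptionPane.showMessageDialog(this, e);
        }
    }

The try-with-resources blocks also guarantee the connection and statement are closed even when an exception is thrown, which the original catch-and-dialog pattern does not.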

    Read the article

  • Azure Web Sites FTP credentials

    - by Bertrand Le Roy
A quick tip for all you new enthusiastic users of the amazing new Azure. I struggled for a few minutes finding this, so I thought I'd share. The Azure dashboard doesn't seem to give easy access to your FTP credentials, and they are not the login and password you use everywhere else. What Azure does give you, though, is a Publish Profile that you can download. This is a plain XML file that should look something like this:

    <publishData>
      <publishProfile profileName="nameofyoursite - Web Deploy"
                      publishMethod="MSDeploy"
                      publishUrl="waws-prod-blu-001.publish.azurewebsites.windows.net:443"
                      msdeploySite="nameofyoursite"
                      userName="$NameOfYourSite"
                      userPWD="sOmeCrYPTicL00kIngStr1nG"
                      destinationAppUrl="http://nameofyoursite.azurewebsites.net"
                      SQLServerDBConnectionString=""
                      mySQLDBConnectionString=""
                      hostingProviderForumLink=""
                      controlPanelLink="http://windows.azure.com">
        <databases/>
      </publishProfile>
      <publishProfile profileName="nameofyoursite - FTP"
                      publishMethod="FTP"
                      publishUrl="ftp://waws-prod-blu-001.ftp.azurewebsites.windows.net/site/wwwroot"
                      ftpPassiveMode="True"
                      userName="nameofyoursite\$nameofyoursite"
                      userPWD="sOmeCrYPTicL00kIngStr1nG"
                      destinationAppUrl="http://nameofyoursite.azurewebsites.net"
                      SQLServerDBConnectionString=""
                      mySQLDBConnectionString=""
                      hostingProviderForumLink=""
                      controlPanelLink="http://windows.azure.com">
        <databases/>
      </publishProfile>
    </publishData>

The second publishProfile element holds the FTP server name (publishUrl), user name and password. This is what you need to use in FileZilla or whatever you use to access your site remotely. Notice how the password looks encrypted. It's not really encrypted: it is your password in clear text. It's just crypto-random gibberish, which is the best kind of password. UPDATE: About 2 minutes after I posted that, David Ebbo mentioned to me on Twitter that if you've configured publishing credentials (for Git typically) those will work too. Don't forget to include the full user name though, which should be of the form nameofthesite\username. The password is the one you defined. That's it. Enjoy.

    Read the article

  • disks not ready in array causes mdadm to force initramfs shell

    - by RaidPinata
Okay, this is starting to get pretty frustrating. I've read most of the other answers on this site that have anything to do with this issue, but I'm still not getting anywhere. I have a RAID 6 array with 10 devices and 1 spare. The OS is on a completely separate device. At boot only three of the 10 devices in the raid are available; the others become available later in the boot process. Currently, unless I go through initramfs, I can't get the system to boot - it just hangs with a blank screen. When I do boot through recovery (initramfs), I get a message asking if I want to assemble the degraded array. If I say no and then exit initramfs, the system boots fine and my array is mounted exactly where I intend it to. Here are the pertinent files as near as I can tell. Ask me if you want to see anything else.

    # mdadm.conf
    #
    # Please refer to mdadm.conf(5) for information about this file.
    #

    # by default (built-in), scan all partitions (/proc/partitions) and all
    # containers for MD superblocks. alternatively, specify devices to scan, using
    # wildcards if desired.
    #DEVICE partitions containers

    # auto-create devices with Debian standard permissions
    CREATE owner=root group=disk mode=0660 auto=yes

    # automatically tag new arrays as belonging to the local system
    HOMEHOST <system>

    # instruct the monitoring daemon where to send mail alerts
    MAILADDR root

    # definitions of existing MD arrays

    # This file was auto-generated on Tue, 13 Nov 2012 13:50:41 -0700
    # by mkconf $Id$
    ARRAY /dev/md0 level=raid6 num-devices=10 metadata=1.2 spares=1 name=Craggenmore:data UUID=37eea980:24df7b7a:f11a1226:afaf53ae

Here is fstab:

    # /etc/fstab: static file system information.
    #
    # Use 'blkid' to print the universally unique identifier for a
    # device; this may be used with UUID= as a more robust way to name devices
    # that works even if disks are added and removed. See fstab(5).
    #
    # <file system> <mount point>   <type>  <options>       <dump>  <pass>
    # / was on /dev/sdc2 during installation
    UUID=3fa1e73f-3d83-4afe-9415-6285d432c133 /    ext4    errors=remount-ro 0 1
    # swap was on /dev/sdc3 during installation
    UUID=c4988662-67f3-4069-a16e-db740e054727 none swap    sw                0 0
    # mount large raid device on /data
    /dev/md0 /data ext4 defaults,nofail,noatime,nobootwait 0 0

Output of cat /proc/mdstat:

    Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
    md0 : active raid6 sda[0] sdd[10](S) sdl[9] sdk[8] sdj[7] sdi[6] sdh[5] sdg[4] sdf[3] sde[2] sdb[1]
          23441080320 blocks super 1.2 level 6, 512k chunk, algorithm 2 [10/10] [UUUUUUUUUU]

    unused devices: <none>

Here is the output of mdadm --detail --scan --verbose:

    ARRAY /dev/md0 level=raid6 num-devices=10 metadata=1.2 spares=1 name=Craggenmore:data UUID=37eea980:24df7b7a:f11a1226:afaf53ae
       devices=/dev/sda,/dev/sdb,/dev/sde,/dev/sdf,/dev/sdg,/dev/sdh,/dev/sdi,/dev/sdj,/dev/sdk,/dev/sdl,/dev/sdd

Please let me know if there is anything else you think might be useful in troubleshooting this... I just can't seem to figure out how to change the boot process so that mdadm waits until the drives are ready to build the array. Everything works just fine if the drives are given enough time to come online.

Edit: changed title to properly reflect the situation.
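Since the array assembles fine once all the disks have spun up, one commonly suggested workaround for readers with the same symptom is to make early boot wait longer before the root mount and array assembly proceed. A sketch, assuming GRUB is the bootloader (the 120-second value is illustrative; tune it to how long the disks actually take):

    # In /etc/default/grub, add a device-settle delay to the kernel command line:
    GRUB_CMDLINE_LINUX_DEFAULT="quiet splash rootdelay=120"

    # Then regenerate the boot configuration and the initramfs so the
    # mdadm hooks pick up the change:
    sudo update-grub
    sudo update-initramfs -u

Whether rootdelay alone is enough depends on where in the initramfs the degraded-array prompt fires, so treat this as a first experiment rather than a guaranteed fix.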

    Read the article

  • Wireless doesn't work on a Broadcom BCM4312

    - by Boderick
As stated, I've just upgraded to 12.04 and my Dell Inspiron 1545 isn't recognising its wireless card. I was wondering if anybody could help?

Edit: Okay, so I found the wireless card by using lspci -vvv and it returned this:

    0c:00.0 Network controller: Broadcom Corporation BCM4312 802.11b/g LP-PHY (rev 01)
            Subsystem: Dell Wireless 1397 WLAN Mini-Card
            Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx-
            Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast TAbort- SERR-
            Kernel modules: ssb

lsmod:

    Module                  Size  Used by
    dm_crypt               22528  0
    joydev                 17393  0
    dell_wmi               12601  0
    sparse_keymap          13658  1 dell_wmi
    dell_laptop            13671  0
    dcdbas                 14098  1 dell_laptop
    psmouse                72919  0
    uvcvideo               67203  0
    serio_raw              13027  0
    videodev               86588  1 uvcvideo
    snd_hda_codec_idt      60251  1
    mac_hid                13077  0
    snd_hda_intel          32765  5
    snd_hda_codec         109562  2 snd_hda_codec_idt,snd_hda_intel
    snd_hwdep              13276  1 snd_hda_codec
    parport_pc             32114  0
    rfcomm                 38139  0
    bnep                   17830  2
    ppdev                  12849  0
    snd_pcm                80845  3 snd_hda_intel,snd_hda_codec
    bluetooth             158438  10 rfcomm,bnep
    snd_seq_midi           13132  0
    snd_rawmidi            25424  1 snd_seq_midi
    snd_seq_midi_event     14475  1 snd_seq_midi
    snd_seq                51567  2 snd_seq_midi,snd_seq_midi_event
    snd_timer              28931  2 snd_pcm,snd_seq
    snd_seq_device         14172  3 snd_seq_midi,snd_rawmidi,snd_seq
    binfmt_misc            17292  1
    snd                    62064  18 snd_hda_codec_idt,snd_hda_intel,snd_hda_codec,snd_hwdep,snd_pcm,snd_rawmidi,snd_seq,snd_timer,snd_seq_device
    soundcore              14635  1 snd
    snd_page_alloc         14108  2 snd_hda_intel,snd_pcm
    lp                     17455  0
    parport                40930  3 parport_pc,ppdev,lp
    sky2                   53628  0
    ums_realtek            17920  0
    uas                    17699  0
    i915                  414603  3
    wmi                    18744  1 dell_wmi
    drm_kms_helper         45466  1 i915
    drm                   197692  4 i915,drm_kms_helper
    i2c_algo_bit           13199  1 i915
    video                  19068  1 i915
    usb_storage            39646  1 ums_realtek

ifconfig -a:

    eth0      Link encap:Ethernet  HWaddr 00:23:ae:24:71:45
              inet addr:192.168.1.158  Bcast:192.168.1.255  Mask:255.255.255.0
              inet6 addr: fe80::223:aeff:fe24:7145/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:14340 errors:0 dropped:0 overruns:0 frame:0
              TX packets:10191 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:15403754 (15.4 MB)  TX bytes:1262570 (1.2 MB)
              Interrupt:18

    ham0      Link encap:Ethernet  HWaddr 7a:79:05:2d:b0:f7
              inet addr:5.45.176.247  Bcast:5.255.255.255  Mask:255.0.0.0
              inet6 addr: fe80::7879:5ff:fe2d:b0f7/64 Scope:Link
              inet6 addr: 2620:9b::52d:b0f7/96 Scope:Global
              UP BROADCAST RUNNING MULTICAST  MTU:1404  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:179 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:500
              RX bytes:0 (0.0 B)  TX bytes:27480 (27.4 KB)

    lo        Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:16436  Metric:1
              RX packets:433 errors:0 dropped:0 overruns:0 frame:0
              TX packets:433 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:60051 (60.0 KB)  TX bytes:60051 (60.0 KB)

iwconfig:

    lo        no wireless extensions.
    ham0      no wireless extensions.
    eth0      no wireless extensions.

The results of sudo lshw -class network:

    *-network
         description: Wireless interface
         product: BCM4312 802.11b/g LP-PHY
         vendor: Broadcom Corporation
         physical id: 0
         bus info: pci@0000:0c:00.0
         logical name: eth1
         version: 01
         serial: 00:22:5f:77:1f:e6
         width: 64 bits
         clock: 33MHz
         capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
         configuration: broadcast=yes driver=wl0 driverversion=5.100.82.38 latency=0 multicast=yes wireless=IEEE 802.11bg
         resources: irq:17 memory:f69fc000-f69fffff
    *-network
         description: Ethernet interface
         product: 88E8040 PCI-E Fast Ethernet Controller
         vendor: Marvell Technology Group Ltd.
         physical id: 0
         bus info: pci@0000:09:00.0
         logical name: eth0
         version: 13
         serial: 00:23:ae:24:71:45
         size: 100Mbit/s
         capacity: 100Mbit/s
         width: 64 bits
         clock: 33MHz
         capabilities: pm msi pciexpress bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd autonegotiation
         configuration: autonegotiation=on broadcast=yes driver=sky2 driverversion=1.30 duplex=full firmware=N/A ip=192.168.1.158 latency=0 link=yes multicast=yes port=twisted pair speed=100Mbit/s
         resources: irq:45 memory:f68fc000-f68fffff ioport:de00(size=256)
    *-network
         description: Ethernet interface
         physical id: 2
         logical name: ham0
         serial: 7a:79:05:2d:b0:f7
         size: 10Mbit/s
         capabilities: ethernet physical
         configuration: autonegotiation=off broadcast=yes driver=tun driverversion=1.6 duplex=full firmware=N/A ip=5.45.176.247 link=yes multicast=yes port=twisted pair speed=10Mbit/s

And the results of rfkill list all:

    0: brcmwl-0: Wireless LAN
            Soft blocked: yes
            Hard blocked: yes
    1: dell-wifi: Wireless LAN
            Soft blocked: yes
            Hard blocked: yes
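For readers with the same card: the rfkill output shows the radio both soft- and hard-blocked, so clearing those is step one. After that, a hedged sketch of the usual driver route for a BCM4312 LP-PHY on 12.04 (the package name is as shipped in the 12.04 repositories; verify it against your release):

    # Clear the software block, then use the laptop's wireless slider or
    # Fn+F2 on an Inspiron 1545 to clear the hard block; confirm with:
    sudo rfkill unblock all
    sudo rfkill list all

    # If the Broadcom wl driver keeps misbehaving, the open-source b43 driver
    # with extracted LP-PHY firmware is the commonly recommended alternative:
    sudo apt-get install firmware-b43-lpphy-installer
    sudo modprobe b43

Only one of wl and b43 should be active at a time, so if you switch stacks, blacklist the other driver before rebooting.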

    Read the article

  • Syntax of passing lambda

    - by Astara
Right now, I'm working on refactoring a program that calls its parts by polling into a more event-driven structure. I've created sched and task classes, with the sched to become a base class of the current main loop. The tasks will be created for each meter so they can be called off of that instead of polling. Each of the events main calls is a type of meter that gathers info and displays it. When the program is coming up, all enabled meters get 'constructed' by a main-sub. In that sub, I want to store off the "this" pointer associated with the meter, as well as the common name for the action routine:

    void MeterMaker::Meter_n_Task(Meter * newmeter) {
        push(newmeter);               // handle non-timed draw events
        Task t = new Task(now() + 0.5L);
        t.period = {0, 1U};
        t.work_meter = newmeter;
        t.work = [&newmeter](){ newmeter.checkevent(); };   // <<-- attempt at lambda
        t.flags = T_Repeat;
        t.enable_task();
        _xos->sched_insert(t);
    }

A sample call to it:

    Meter_n_Task(new CPUMeter(_xos, "CPU "));

I've made the scheduler a base class of the main routine (that handles the loop), and I've tried several variations to get the task class to be a base of the meter class, but I keep running into roadblocks. It's a lot like whack-a-mole: pound in something to fix one place, and a new problem pops out elsewhere. Part of the problem is that the sched.h file that is trying to hold the Task queue includes the Task header file. The task file wants to refer to the most "base" Meter class. The meter class pulls in the main class of the parent, as it passes a copy of the parent to the children so they can access the draw routines in the parent. Two references in the task file are for the 'this' pointer of the meter and the meter's update sub (to be called via this):

    void *this_data = NULL;
    void (*this_func)() = NULL;

Note - I didn't really want to store these in the class, as I wanted to use a lambda in that meter-and-task routine above to store a routine plus context to be used to call the meter's action routine. I couldn't figure out the syntax. And I am running into other syntax problems trying to store the pointers, such as:

    g++: COMPILE lsched.cc
    In file included from meter.h:13:0,
                     from ltask.h:17,
                     from lsched.h:13,
                     from lsched.cc:13:
    xosview.h:30:47: error: expected class-name before '{' token
     class XOSView : public XWin, public Scheduler {

Like above, where it asks for a class name exactly where the class name "Scheduler" is. Huh? That IS a class name. I keep going in circles with things that don't make sense... Ideally I'd get the lambda to work right in the Meter_n_Task routine at the top. I wanted to store only one pointer in the Task class: a pointer to my lambda, which would have already captured the "this" value. But I couldn't get that syntax to work at all when I tried to store it into a var in the Task class. This project, FWIW, is my teething project on the new C++... (of course it's simple!.. ;-)). I've made quite a bit of progress in other areas of the code, but this lambda syntax has me stumped... it's at times like these that I appreciate the ease of this type of operation in perl. Sigh. Not sure the best way to ask for help here, as this isn't a simple question. But thought I'd try!... ;-) Too bad I can't attach files to this Q.
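For readers hitting the same wall: the type that stores "a lambda that has already captured its context" is std::function<void()>, and the pointer should be captured by value ([newmeter], not [&newmeter]) so the copy inside the lambda outlives the enclosing function. A minimal, self-contained sketch of the pattern (the Meter and Task stand-ins are illustrative, not the post's real classes):

    #include <functional>
    #include <iostream>

    // Minimal stand-ins for the post's Meter and Task classes.
    struct Meter {
        void checkevent() { std::cout << "meter checked\n"; }
    };

    struct Task {
        // std::function can hold any callable, including a capturing lambda,
        // so Task needs no separate "this pointer" and "function pointer" fields.
        std::function<void()> work;
    };

    int main() {
        Meter *m = new Meter();
        Task t;
        // Capture the pointer BY VALUE: the lambda stores its own copy,
        // so it stays valid after this scope's locals are gone. Note also
        // that a pointer uses ->, not . , to call the member.
        t.work = [m]() { m->checkevent(); };
        t.work();    // invoked later, event-driven style
        delete m;
        return 0;
    }

This also sidesteps the header circularity, since Task no longer needs to know the Meter type at all.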

    Read the article

  • A Reusable Builder Class for .NET testing

    - by Liam McLennan
When writing tests, other than end-to-end integration tests, we often need to construct test data objects. Of course this can be done using the class's constructor and manually configuring the object, but getting many objects into a valid state soon becomes a large percentage of the testing effort. After many years of painstakingly creating builders for each of my domain objects, I have finally become lazy enough to bother to write a generic, reusable builder class for .NET. To use it you instantiate an instance of the builder and configure it with a builder method for each class you wish it to be able to build. The builder method should require no parameters and should return a new instance of the type in a default, valid state. In other words, the builder method should be a Func<TypeToBeBuilt>. The best way to make this clear is with an example. In my application I have the following domain classes that I want to be able to use in my tests:

    public class Person
    {
        public string Name { get; set; }
        public int Age { get; set; }
        public bool IsAndroid { get; set; }
    }

    public class Building
    {
        public string Street { get; set; }
        public Person Manager { get; set; }
    }

The builder for this domain is created like so:

    build = new Builder();
    build.Configure(new Dictionary<Type, Func<object>>
    {
        {typeof(Building), () => new Building {Street = "Queen St", Manager = build.A<Person>()}},
        {typeof(Person), () => new Person {Name = "Eugene", Age = 21}}
    });

Note how Building depends on Person, even though the Person builder method is not defined yet. Now in a test I can retrieve a valid object from the builder:

    var person = build.A<Person>();

If I need a class in a customised state I can supply an Action<TypeToBeBuilt> to mutate the object post construction:

    var person = build.A<Person>(p => p.Age = 99);

The power and efficiency of this approach becomes apparent when your tests require larger and more complex objects than Person and Building. When I get some time I intend to implement the same functionality in Javascript and Ruby. Here is the full source of the Builder class:

    public class Builder
    {
        private Dictionary<Type, Func<object>> defaults;

        public void Configure(Dictionary<Type, Func<object>> defaults)
        {
            this.defaults = defaults;
        }

        public T A<T>()
        {
            if (!defaults.ContainsKey(typeof(T)))
                throw new ArgumentException("No object of type " + typeof(T).Name + " has been configured with the builder.");

            T o = (T)defaults[typeof(T)]();
            return o;
        }

        public T A<T>(Action<T> customisation)
        {
            T o = A<T>();
            customisation(o);
            return o;
        }
    }

    Read the article

  • WiFi stops working after a while in Lenovo ThinkPad W520 (Ubuntu 12.04)

    - by el10780
After several minutes (I do not know how many), there is no internet connection on my laptop via Wi-Fi. Ubuntu doesn't show any kind of message that my WiFi was disconnected, nor is there a signal drop, but suddenly Firefox stops connecting to web pages. I checked my modem/router and it seems to be working fine. I also tried to reboot the WiFi device, and nothing happens. The only thing that makes it work again is a reboot of the system; if I do not want to reboot, I am forced to connect to the Internet using an Ethernet cable. Does anybody know what is happening?

## Some hardware info that might be helpful ##

    el10780@ThinkPad-W520:~$ sudo lshw -class network
    *-network
         description: Ethernet interface
         product: 82579LM Gigabit Network Connection
         vendor: Intel Corporation
         physical id: 19
         bus info: pci@0000:00:19.0
         logical name: eth0
         version: 04
         serial: f0:de:f1:f1:be:10
         size: 100Mbit/s
         capacity: 1Gbit/s
         width: 32 bits
         clock: 33MHz
         capabilities: pm msi bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation
         configuration: autonegotiation=on broadcast=yes driver=e1000e driverversion=1.5.1-k duplex=full firmware=0.13-3 ip=192.168.0.10 latency=0 link=yes multicast=yes port=twisted pair speed=100Mbit/s
         resources: irq:50 memory:f3a00000-f3a1ffff memory:f3a2b000-f3a2bfff ioport:6080(size=32)
    *-network
         description: Wireless interface
         product: Centrino Advanced-N + WiMAX 6250
         vendor: Intel Corporation
         physical id: 0
         bus info: pci@0000:03:00.0
         logical name: wlan0
         version: 5e
         serial: 64:80:99:63:14:74
         width: 64 bits
         clock: 33MHz
         capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless
         configuration: broadcast=yes driver=iwlwifi driverversion=3.2.0-26-generic firmware=41.28.5.1 build 33926 ip=192.168.0.6 latency=0 link=yes multicast=yes wireless=IEEE 802.11abgn
         resources: irq:52 memory:f3900000-f3901fff
    *-network
         description: Ethernet interface
         physical id: 1
         bus info: usb@2:1.3
         logical name: wmx0
         serial: 00:1d:e1:53:b2:e8
         capabilities: ethernet physical
         configuration: driver=i2400m firmware=i6050-fw-usb-1.5.sbcf link=no

    el10780@ThinkPad-W520:~$ lspci
    00:00.0 Host bridge: Intel Corporation 2nd Generation Core Processor Family DRAM Controller (rev 09)
    00:01.0 PCI bridge: Intel Corporation Xeon E3-1200/2nd Generation Core Processor Family PCI Express Root Port (rev 09)
    00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
    00:16.0 Communication controller: Intel Corporation 6 Series/C200 Series Chipset Family MEI Controller #1 (rev 04)
    00:16.3 Serial controller: Intel Corporation 6 Series/C200 Series Chipset Family KT Controller (rev 04)
    00:19.0 Ethernet controller: Intel Corporation 82579LM Gigabit Network Connection (rev 04)
    00:1a.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #2 (rev 04)
    00:1b.0 Audio device: Intel Corporation 6 Series/C200 Series Chipset Family High Definition Audio Controller (rev 04)
    00:1c.0 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 1 (rev b4)
    00:1c.1 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 2 (rev b4)
    00:1c.3 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 4 (rev b4)
    00:1c.4 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 5 (rev b4)
    00:1c.6 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 7 (rev b4)
    00:1d.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #1 (rev 04)
    00:1f.0 ISA bridge: Intel Corporation QM67 Express Chipset Family LPC Controller (rev 04)
    00:1f.2 SATA controller: Intel Corporation 6 Series/C200 Series Chipset Family 6 port SATA AHCI Controller (rev 04)
    00:1f.3 SMBus: Intel Corporation 6 Series/C200 Series Chipset Family SMBus Controller (rev 04)
    01:00.0 VGA compatible controller: NVIDIA Corporation GF108 [Quadro 1000M] (rev a1)
    03:00.0 Network controller: Intel Corporation Centrino Advanced-N + WiMAX 6250 (rev 5e)
    0d:00.0 System peripheral: Ricoh Co Ltd Device e823 (rev 08)
    0d:00.3 FireWire (IEEE 1394): Ricoh Co Ltd R5C832 PCIe IEEE 1394 Controller (rev 04)
    0e:00.0 USB controller: NEC Corporation uPD720200 USB 3.0 Host Controller (rev 04)

    el10780@ThinkPad-W520:~$ rfkill list all
    0: hci0: Bluetooth
            Soft blocked: no
            Hard blocked: no
    1: tpacpi_bluetooth_sw: Bluetooth
            Soft blocked: no
            Hard blocked: no
    2: phy0: Wireless LAN
            Soft blocked: no
            Hard blocked: no
    3: i2400m-usb:2-1.3:1.0: WiMAX
            Soft blocked: yes
            Hard blocked: no

The weirdest thing is this screenshot, which I took after running the Additional Drivers program. I have an NVidia Quadro 1000M and an Intel Centrino WiFi card, yet it shows that there are no proprietary drivers for my system: http://imageshack.us/photo/my-images/268/screenshotfrom201207062.png/
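For readers seeing the same silent-death behaviour on an iwlwifi card under 12.04, two module options are commonly suggested. This is a sketch, not a guaranteed fix; 11n_disable=1 trades 802.11n throughput for link stability, and the file name is arbitrary:

    # Disable 802.11n negotiation and in-driver power saving for iwlwifi:
    echo "options iwlwifi 11n_disable=1" | sudo tee /etc/modprobe.d/iwlwifi-opts.conf
    # Reload the driver (or simply reboot) for the option to take effect:
    sudo modprobe -r iwlwifi && sudo modprobe iwlwifi

If the link then stays up, the instability was in the 802.11n path and you can experiment with re-enabling it after firmware or kernel updates.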

    Read the article

  • Deploying Oracle ADF Essentials Applications to Glassfish

    - by Shay Shmeltzer
With the new Oracle ADF Essentials offering you can now deploy applications that leverage Oracle ADF on the open source GlassFish 3.1 server. Deployment is documented in the official JDeveloper and ADF documentation (here), but below is a summary of the steps, and a video, showing what you'll need to do to get a basic Oracle ADF Essentials application to work on GlassFish. Note - to make starting/stopping GlassFish easier for my demo I used my GlassFish extension, which you can get here.

First we'll install some ADF Runtime libraries on GlassFish:

- Download and install GlassFish. (Note - if you also have an Oracle DB on the same machine, you'll want to switch GlassFish's HTTP port to something other than 8080.)
- Download the Oracle ADF Essentials packaging - this will get you an adf_essentials.zip file.
- Copy adf_essentials.zip to the lib directory of your GlassFish domain - on a default Windows install this would be: C:\glassfish3\glassfish\domains\domain1\lib
- Go to the above lib directory and issue: unzip -j adf_essentials.zip
  This will extract the ADF libraries to the directory. Now you can start the GlassFish server.

Now let's configure GlassFish to handle applications of the ADF type:

- Invoke the admin console of GlassFish (http://localhost:4848) and log into your admin account.
- Go to Configurations->server-config->JVM Settings and choose the JVM Options tab. Add the following entries:
  -XX:MaxPermSize=512m (note: this entry should already exist, so just make sure it has a big enough value)
  -Doracle.mds.cache=simple

While we are in the admin console, we can also define the JDBC connections that will be used by our application:

- Go into Resources->JDBC->JDBC Connection Pools and click to create a New one.
- Give it a name, choose the resource type to be javax.sql.XADataSource, and choose Oracle as the database driver vendor. Click Next.
- Scroll down to the Additional Properties section and fill in the information for your database. The values for an Oracle XE would be: user=hr, databaseName=XE, Password=hr, ServerName=localhost, DriverType=thin, PortNumber=1521. Click Finish.
- Click Ping to check that your connection works.
- Now define a new JDBC Resource that will use the pool you just defined. In my example I called the resource jdbc/HRDS. You will need this name to match the name in your Application Module connection configuration.
- Now restart the GlassFish server for the changes to take effect.

Get an ADF application going (you can use the regular Fusion Application template for this):

- Go into the project properties of your viewController project; under the Deployment section, click to edit the deployment profile defined there. Go to Platform, choose GlassFish 3.1 from the drop-down list, and click OK to go back to your project.
- Go to Application -> Application Properties -> Deployment. Go to Platform, choose GlassFish 3.1 from the drop-down list, and click OK to go back to your project. This step makes sure that JDeveloper will automatically add the necessary ADF libraries to the EAR file that is generated for deployment on GlassFish.
- Go to Application -> Deploy and deploy either to an EAR file or directly to a GlassFish server connection that you created. Things should just work, but if they don't, look in server.log in the log directory and check what error is in there.
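For reference, the library-installation steps above condensed into shell form (a sketch: the paths are illustrative and assume a Unix-style layout with a default domain1; on Windows use the matching domain1\lib folder):

    # Copy the ADF Essentials libraries into the GlassFish domain:
    cd /path/to/glassfish3/glassfish/domains/domain1/lib
    cp ~/Downloads/adf_essentials.zip .
    unzip -j adf_essentials.zip   # -j flattens the archive so the jars land directly in lib/
    asadmin start-domain          # then start GlassFish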
Here is a video demo of the various steps. Note - right now the deployment of an ADF application takes about 2 minutes on my machine; we are hoping to improve this timing in the future. People who are more familiar with GlassFish might want to explore using exploded-directory deployment and see if they can get it to work.

    Read the article

  • Fetching Partition Information

    - by Mike Femenella
For a recent SSIS package at work I needed to determine the distinct values in a partition, the number of rows in each partition, and the filegroup name on which each partition resided, in order to come up with a grouping mechanism. Of course sys.partitions comes to mind for some of that, but there are a few other tables you need to link to in order to grab all the information required.

The table I'm working on contains 8.8 billion rows, and finding the distinct partition keys from it was not a fast operation. My original solution was to create a temporary table, grab the distinct values for the partitioned column, then update via sys.partitions for the row counts and the $PARTITION function for the partition id, and finally look back to the sys.filegroups table for the filegroup names. It wasn't pretty, and it could take up to 15 minutes to return the results. The primary issue is pulling distinct values from the table: DISTINCT queries against 8.8 billion rows don't go quickly.

A few beers into a conversation with a friend, we ended up talking about work, which led to a conversation about the task described above. The solution was already built into SQL Server; it just needed to be pulled together.

The first table I needed was sys.partition_range_values. This contains one row for each range boundary value for a partition function. In my case I have a partition function which uses dayid values; for example, July 4th would be represented as an int, 20130704. This table lists out all of the dayid values which were defined in the function, which eliminated the need to query my source table for distinct dayid values - everything I needed was already there. The only caveat was that in my SSIS package I needed to create a bucket for any dayid values that were out of bounds for my function. For example, if my function handled 20130501 through 20130704 and I had day values of 20130401 or 20130705 in my table, these would not be listed in sys.partition_range_values, so I created an "everything else" bucket in my SSIS package just in case any dayid values were unaccounted for.

Getting the number of rows for a partition is very easy: the sys.partitions table contains a row count for each partition, so it's just a matter of querying for the table's object_id and an index_id of 1 (the clustered index).

The final piece of information was the filegroup name. There are two options available to get it: sys.data_spaces or sys.filegroups. For my query I chose sys.filegroups, but really it's a matter of preference and data needs. In order to bridge between sys.partitions and either sys.data_spaces or sys.filegroups you need the container_id; this you get by joining sys.allocation_units.container_id to sys.partitions.hobt_id. sys.allocation_units contains the field data_space_id, which then lets you join in either sys.data_spaces or sys.filegroups.

The end result is the query below, which typically executes for me in under 1 second. I've included the joins to both sys.filegroups and sys.data_spaces, with the sys.data_spaces join commented out. As I mentioned above, this shaves a good 10-15 minutes off of my original SSIS package and is a really easy tweak to get a boost in my ETL time. Enjoy.
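The query itself did not survive the post's formatting, but the joins described above are enough to reconstruct its shape. A sketch (the table name is a placeholder; the boundary-to-partition alignment shown assumes a RANGE RIGHT function, so check boundary_id handling against your function's definition):

    SELECT  prv.value            AS day_boundary,    -- dayid boundary from the function
            p.partition_number,
            p.rows,                                  -- row count per partition
            fg.name              AS filegroup_name
    FROM    sys.partitions p
    JOIN    sys.indexes i
            ON  i.object_id = p.object_id AND i.index_id = p.index_id
    JOIN    sys.partition_schemes ps
            ON  ps.data_space_id = i.data_space_id
    JOIN    sys.partition_functions pf
            ON  pf.function_id = ps.function_id
    LEFT JOIN sys.partition_range_values prv
            ON  prv.function_id = pf.function_id
            AND prv.boundary_id = p.partition_number -- alignment depends on RANGE LEFT/RIGHT
    JOIN    sys.allocation_units au
            ON  au.container_id = p.hobt_id          -- bridge partitions -> allocation units
            AND au.type = 1                          -- IN_ROW_DATA only, one row per partition
    JOIN    sys.filegroups fg
            ON  fg.data_space_id = au.data_space_id
    -- JOIN sys.data_spaces ds ON ds.data_space_id = au.data_space_id  -- the alternative join
    WHERE   p.object_id = OBJECT_ID('dbo.MyBigTable')  -- placeholder table name
      AND   p.index_id  = 1;                           -- the clustered index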

    Read the article

< Previous Page | 560 561 562 563 564 565 566 567 568 569 570 571  | Next Page >