Search Results

Search found 47615 results on 1905 pages for 'make it useful keep it simple'.


  • What are the ways to do synchronized streaming video and text?

    - by Marc
    I would like to have two streams: one traditional streaming video and another of text. I will also build an interface that allows the user to create the text portion while watching the video. The context is usually an individual being filmed doing a presentation; later (the next day, for example) a coach will make text annotations (structured data is a plus), and the result is the text stream. It seems this is possible with Silverlight (see the article in the latest MSDN). However, what other methods are there, if any? If there are any, please give the reasons why you recommend them. Thanks. I would prefer something that isn't a software-as-a-service hosted solution, but don't let that keep you from giving an answer. Also keep in mind that the user shouldn't have to re-encode the video source; the text stream should be separate, with some way to synchronize a coach's comments to an arbitrary timestamp in the video.

    Read the article

  • How to refresh the entire device's screen (Windows Mobile)?

    - by walidad
    Hi everybody, I'm working on a simple application that draws an alpha-blended picture on the screen's device context every 2 seconds. I want to refresh the screen contents before each drawing operation (to erase the previously drawn picture). I have tried many tricks, but unfortunately the screen won't refresh correctly; some regions still keep portions of the drawn picture, and I'm really frustrated by this issue :( This is what I have tried so far (I'm using C#): SendMessage(HWND_BROADCAST, WM_SYSCOLORCHANGE, IntPtr.Zero, IntPtr.Zero) -- the time lost in the refresh is enough to rule this out; UpdateWindow(HWND_BROADCAST) -- does not work at all; InvalidateRect(IntPtr.Zero, IntPtr.Zero, true) -- same result; SendMessage(HWND_BROADCAST, WM_PAINT, IntPtr.Zero, IntPtr.Zero) -- no luck; SendMessage(HWND_BROADCAST, WM_SETTINGCHANGE, new IntPtr(SPI_SETNONCLIENTMETRICS), IntPtr.Zero) -- an attempt to refresh the shell, no result. I also used EnumWindows with a callback, which is very slow and does not fit my case. I want to refresh the whole screen. Help please! Regards, Waleed
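    A different approach (not from the original post) is to avoid asking every window to repaint and instead save the screen area under the picture before drawing, then blit it back to erase. The sketch below is a minimal, untested illustration; it assumes the usual GDI exports (GetDC, BitBlt, and friends) are available from coredll.dll on the device, and the rectangle and handle names are made up for the example.

    ```csharp
    using System;
    using System.Runtime.InteropServices;

    static class ScreenEraser
    {
        [DllImport("coredll.dll")] static extern IntPtr GetDC(IntPtr hwnd);
        [DllImport("coredll.dll")] static extern int ReleaseDC(IntPtr hwnd, IntPtr hdc);
        [DllImport("coredll.dll")] static extern IntPtr CreateCompatibleDC(IntPtr hdc);
        [DllImport("coredll.dll")] static extern IntPtr CreateCompatibleBitmap(IntPtr hdc, int w, int h);
        [DllImport("coredll.dll")] static extern IntPtr SelectObject(IntPtr hdc, IntPtr obj);
        [DllImport("coredll.dll")] static extern bool BitBlt(IntPtr dst, int x, int y, int w, int h,
                                                             IntPtr src, int sx, int sy, uint rop);
        [DllImport("coredll.dll")] static extern bool DeleteDC(IntPtr hdc);
        [DllImport("coredll.dll")] static extern bool DeleteObject(IntPtr obj);

        const uint SRCCOPY = 0x00CC0020;
        static IntPtr _memDc, _bmp, _oldBmp;

        // Save the screen region that is about to be painted over.
        public static void SaveUnder(int x, int y, int w, int h)
        {
            IntPtr screen = GetDC(IntPtr.Zero);           // DC for the whole screen
            _memDc = CreateCompatibleDC(screen);
            _bmp = CreateCompatibleBitmap(screen, w, h);
            _oldBmp = SelectObject(_memDc, _bmp);
            BitBlt(_memDc, 0, 0, w, h, screen, x, y, SRCCOPY);
            ReleaseDC(IntPtr.Zero, screen);
        }

        // Restore the saved pixels, erasing the alpha-blended picture.
        public static void RestoreUnder(int x, int y, int w, int h)
        {
            IntPtr screen = GetDC(IntPtr.Zero);
            BitBlt(screen, x, y, w, h, _memDc, 0, 0, SRCCOPY);
            ReleaseDC(IntPtr.Zero, screen);
            SelectObject(_memDc, _oldBmp);
            DeleteObject(_bmp);
            DeleteDC(_memDc);
        }
    }
    ```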

    Read the article

  • Can obfuscation (proguard) lead to MIDlet malfunction?

    - by eMgz
    Hi, I'm trying to obfuscate a Java MIDlet with ProGuard. It runs fine on the PC; however, when I run it on the phone, the program opens, connects to the server, and then freezes. If I disable obfuscation, it runs fine on the phone again. I've tried all the obfuscation levels for apps (7, 8 and 9 in NetBeans), and none of them seems to work properly, and I can't release this app for commercial use without obfuscation. Also, the compiler prints some warnings: "Note: duplicate definition of library class [java.io.ByteArrayOutputStream]" and "Note: there were 14 duplicate class definitions." But I don't know if this is really the problem. Does anyone know what is wrong? The obfuscator arguments are listed below. Obfuscator arguments (7): -dontusemixedcaseclassnames -defaultpackage '' -keep public class ** { public *; }. Obfuscator arguments (8): same as (7) plus -overloadaggressively. Obfuscator arguments (9): same as (8) but with -keep public class ** extends javax.microedition.midlet.MIDlet { public *; } instead. Thanks.
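    One thing worth checking (not from the original post, just a common ProGuard/MIDlet pitfall) is that aggressive options such as -overloadaggressively and class renaming break anything loaded reflectively or referenced by name in the JAD/manifest. A minimal, hypothetical configuration that keeps the MIDlet entry points and reflectively loaded classes intact might look like this; the com.example names are placeholders:

    ```
    # Keep every MIDlet subclass (the class name is referenced in the JAD/manifest)
    -keep public class * extends javax.microedition.midlet.MIDlet { public *; }

    # Keep anything instantiated via Class.forName() or other reflection
    # (replace with the real class names the app loads by name)
    -keep public class com.example.protocol.** { *; }

    # Safer settings while diagnosing the freeze
    -dontoptimize
    # -overloadaggressively   # leave disabled; it is reported to cause problems on some VMs
    ```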

    Read the article

  • Integration Patterns with Azure Service Bus Relay, Part 3: Anonymous partial-trust consumer

    - by Elton Stoneman
    This is the third in the IPASBR series; see also: Integration Patterns with Azure Service Bus Relay, Part 1: Exposing the on-premise service, and Integration Patterns with Azure Service Bus Relay, Part 2: Anonymous full-trust .NET consumer. As the patterns get further from the simple .NET full-trust consumer, all that changes is the communication protocol and the authentication mechanism. In Part 3 the scenario is that we still have a secure .NET environment consuming our service, so we can store shared keys securely, but the runtime environment is locked down so we can't use Microsoft.ServiceBus to get the nice WCF relay bindings. To support this we will expose a RESTful endpoint through the Azure Service Bus, and require the consumer to send a security token with each HTTP service request.

    Pattern applicability. This is a good fit for scenarios where:

    - the runtime environment is secure enough to keep shared secrets
    - the consumer can execute custom code, including building HTTP requests with custom headers
    - the consumer cannot use the Azure SDK assemblies
    - the service may need to know who is consuming it
    - the service does not need to know who the end user is

    Note there isn't actually a .NET requirement here. By exposing the service as a REST endpoint, anything that can talk HTTP can be a consumer. We'll authenticate through ACS, which also gives us REST endpoints, so the service is still accessed securely. Our real-world example would be a hosted cloud app, where we have enough room in the app's customisation to keep the shared secret somewhere safe and to hook in some HTTP calls. We will be flowing an identity through to the on-premise service now, but it will be the service identity given to the consuming app - the end user's identity isn't flowed through yet. In this post, we'll consume the service from Part 1 in ASP.NET using the WebHttpRelayBinding. The code for Part 3 (+ Part 1) is on GitHub here: IPASBR Part 3.

    Authenticating and authorizing with ACS. We'll follow the previous examples and add a new service identity for the namespace in ACS, so we can separate permissions for different consumers (see the walkthrough in Part 1). I've named the identity partialTrustConsumer. We'll be authenticating against ACS with an explicit HTTP call, so we need a password credential rather than a symmetric key - for a nice secure option, generate a symmetric key, copy it to the clipboard, then change the type to password and paste in the key. We then need to do the same as in Part 2: add a rule to map the incoming identity claim to an outgoing authorization claim that allows the identity to send messages to Service Bus: Issuer: Access Control Service; Input claim type: http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier; Input claim value: partialTrustConsumer; Output claim type: net.windows.servicebus.action; Output claim value: Send. As with Part 2, this sets up a service identity which can send messages into Service Bus, but cannot register itself as a listener or manage the namespace.

    RESTfully exposing the on-premise service through Azure Service Bus Relay. The Part 3 sample code is ready to go: just put your Azure details into Solution Items\AzureConnectionDetails.xml and "Run Custom Tool" on the .tt files. But doing it yourself is very simple. We already have a WebGet attribute in the service for making REST calls locally, so we are just going to add a new endpoint which uses the WebHttpRelayBinding to relay that service through Azure.
    It's as easy as adding this endpoint to Web.config for the service:

        <endpoint address="https://sixeyed-ipasbr.servicebus.windows.net/rest"
                  binding="webHttpRelayBinding"
                  contract="Sixeyed.Ipasbr.Services.IFormatService"
                  behaviorConfiguration="SharedSecret">
        </endpoint>

    - and adding the webHttp attribute in your endpoint behavior:

        <behavior name="SharedSecret">
          <webHttp/>
          <transportClientEndpointBehavior credentialType="SharedSecret">
            <clientCredentials>
              <sharedSecret issuerName="serviceProvider"
                            issuerSecret="gl0xaVmlebKKJUAnpripKhr8YnLf9Neaf6LR53N8uGs="/>
            </clientCredentials>
          </transportClientEndpointBehavior>
        </behavior>

    Where's my WSDL? The metadata story for REST is a bit less automated. In our local webHttp endpoint we've enabled WCF's built-in help, so if you navigate to http://localhost/Sixeyed.Ipasbr.Services/FormatService.svc/rest/help you'll see the URI format for making a GET request to the service. The format is the same over Azure, so this is where you'll be connecting: https://[your-namespace].servicebus.windows.net/rest/reverse?string=abc123. Build the service with the new endpoint, open that in a browser, and you'll get an XML version of an HTTP status code - a 401 with an error message stating that you haven't provided an authorization header:

        <?xml version="1.0"?><Error><Code>401</Code><Detail>MissingToken: The request contains no authorization header..TrackingId:4cb53408-646b-4163-87b9-bc2b20cdfb75_5,TimeStamp:10/3/2012 8:34:07 PM</Detail></Error>

    By default, the setup of your Service Bus endpoint as a relying party in ACS expects a Simple Web Token to be presented with each service request, and in the browser we're not passing one, so we can't access the service. Note that this request doesn't get anywhere near your on-premise service; Service Bus only relays requests once they've got the necessary approval from ACS. Why didn't the consumer need to get ACS authorization in Part 2? It did, but it was all done behind the scenes in the NetTcpRelayBinding. By specifying our Shared Secret credentials in the consumer, the service call is preceded by a check on ACS to see that the identity provided is a) valid, and b) allowed access to our Service Bus endpoint. By making manual HTTP requests, we need to take care of that ACS check ourselves now. We do that with a simple WebClient call to the ACS endpoint of our service; passing the shared secret credentials, we will get back an SWT:

        var values = new System.Collections.Specialized.NameValueCollection();
        values.Add("wrap_name", "partialTrustConsumer"); // service identity name
        values.Add("wrap_password", "suCei7AzdXY9toVH+S47C4TVyXO/UUFzu0zZiSCp64Y="); // service identity password
        values.Add("wrap_scope", "http://sixeyed-ipasbr.servicebus.windows.net/"); // this is the realm of the RP in ACS
        var acsClient = new WebClient();
        var responseBytes = acsClient.UploadValues("https://sixeyed-ipasbr-sb.accesscontrol.windows.net/WRAPv0.9/", "POST", values);
        rawToken = System.Text.Encoding.UTF8.GetString(responseBytes);

    With a little manipulation, we then attach the SWT to subsequent REST calls in the authorization header; the token contains the Send claim returned from ACS, so we will be authorized to send messages into Service Bus.
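    The post leaves that "little manipulation" as an exercise; a minimal sketch of what it typically looks like with the WRAP protocol is below. The ACS response body is form-encoded, so we pull out wrap_access_token, URL-decode it, and send it in a WRAP authorization header. The variable names are illustrative, not from the sample code.

    ```csharp
    // rawToken is the form-encoded body returned by ACS, e.g.
    // "wrap_access_token=...&wrap_access_token_expires_in=1199"
    var pairs = System.Web.HttpUtility.ParseQueryString(rawToken);
    var swt = pairs["wrap_access_token"]; // ParseQueryString URL-decodes the value

    var client = new System.Net.WebClient();
    client.Headers["Authorization"] = string.Format("WRAP access_token=\"{0}\"", swt);
    var reversed = client.DownloadString(
        "https://sixeyed-ipasbr.servicebus.windows.net/rest/reverse?string=abc123");
    ```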
    Running the sample. Navigate to http://localhost:2028/Sixeyed.Ipasbr.WebHttpClient/Default.cshtml, enter a string and hit Go! - your string will be reversed by your on-premise service, routed through Azure. Using shared secret client credentials in this way means ACS is the identity provider for your service, and the claim which allows Send access to Service Bus is consumed by Service Bus. None of the authentication details make it through to your service, so your service is not aware of who the consumer is (MSDN calls this "anonymous authentication").

    Read the article

  • How do you get yourself focused with so many distractions around? (which you can't or don't want to

    - by Teja Kantamneni
    This question is definitely by a programmer and centered on programmers, but if somebody feels it doesn't belong here, I would not mind deleting it. I don't think this needs to be a wiki, but if you feel it should be, I can do that too. The question is: as a programmer you have to keep yourself up to date with the latest technologies, and for that every programmer will generally follow some technology blogs and social networking sites (Twitter, FB, SO, DZone, etc.). How do you keep yourself focused on your work while still following the technology trends? No subjective or argumentative answers, please; I just want to know what practices other fellow programmers use for this...

    Read the article

  • DataAdapter Select string from base table schema?

    - by MattSlay
    When I built my .xsd, I had to choose the columns for each table, and it made a schema for the tables, right? So how can I get that Select string to use as a base Select command for new instances of DataAdapters, and then just append a Where and Order By clause to it as needed? That would keep me from having to keep each DataAdapter's field list (for the same table) in sync with the schema of that table in the .xsd file. Isn't it common to have several DataAdapters that work on a certain table schema, but with different params in the Where and Order By clauses? Surely one does not have to maintain (or even redundantly build) the field-list part of the Select strings for half a dozen DataAdapters that all work off the same table schema. I'm envisioning something like this pseudocode:
    BaseSelectString = MyTypedDataSet.JobsTable.GetSelectStringFromSchema() // Is there such a method or technique?
    WhereClause = " Where SomeField = @Param1 and SomeOtherField = @Param2"
    OrderByClause = " Order By Field1, Field2"
    SelectString = BaseSelectString + WhereClause + OrderByClause
    OleDbDataAdapter adapter = new OleDbDataAdapter(SelectString, MyConn)
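    There is no built-in GetSelectStringFromSchema() as far as I know, but a small helper can build the base SELECT from the typed DataTable's column collection, which is generated from the .xsd. A hedged sketch; the JobsTableDataTable and table names follow the pseudocode above and are assumptions:

    ```csharp
    using System.Data;
    using System.Linq;

    static class SelectBuilder
    {
        // Builds "SELECT col1, col2, ... FROM tableName" from any typed DataTable's schema.
        public static string BaseSelect(DataTable schemaTable)
        {
            var columns = schemaTable.Columns
                                     .Cast<DataColumn>()
                                     .Select(c => c.ColumnName)
                                     .ToArray();
            return "SELECT " + string.Join(", ", columns) + " FROM " + schemaTable.TableName;
        }
    }

    // Usage, following the pseudocode above:
    // var jobs = new MyTypedDataSet.JobsTableDataTable();          // generated from the .xsd
    // string select = SelectBuilder.BaseSelect(jobs)
    //                 + " WHERE SomeField = @Param1 AND SomeOtherField = @Param2"
    //                 + " ORDER BY Field1, Field2";
    // var adapter = new OleDbDataAdapter(select, myConn);
    ```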

    Read the article

  • Oracle Enterprise Manager Ops Center : Using Operational Profiles to Install Packages and other Content

    - by LeonShaner
    Oracle Enterprise Manager Ops Center provides numerous ways to deploy content, such as through OS Update Profiles, as part of an OS Provisioning plan, or combinations of those and other "Install Software" capabilities of Deployment Plans. This short "how-to" blog will highlight an alternative way to deploy content using Operational Profiles.

    Usually we think of Operational Profiles as a way to execute a simple "one-time" script to perform a basic system administration function, which can optionally be based on user input; however, Operational Profiles can be much more powerful than that. There is often more to performing an action than merely running a script -- sometimes configuration files, packages, binaries, and other scripts are needed to perform the action, and sometimes the user would like to leave such content on the system for later use. For shell scripts and other content written to be generic enough to work on any flavor of UNIX, converting the same scripts and configuration files into a Solaris 10 SVR4 package, a Solaris 11 IPS package, and/or a Linux RPM might be seen as three times the work, for little appreciable gain. That is where using an Operational Profile to deploy simple scripts and other generic content can be very helpful. The approach is so powerful that pretty much any kind of content can be deployed using an Operational Profile, provided the files involved are not overly large and it is not necessary to convert the content into UNIX variant-specific formats.

    The basic formula for deploying content with an Operational Profile is as follows:

    - Begin with a traditional script header, which is a UNIX shell script that will be responsible for decoding and extracting content, copying files into the right places, and executing any other scripts and commands needed to install and configure that content.
    - Include steps to make the script platform-aware, to do the right thing for a given UNIX variant, or print a "sorry" message if the operator has somehow tried to run the Operational Profile on a system where the script is not designed to run. Ops Center can constrain execution by target type, so such checks at this level are an added safeguard, but they are also useful with the generic target type of "Operating System", where the admin wants the script to "do the right thing," whatever the UNIX variant.
    - Include helpful output to show script progress, and any other informational messages that can help the admin determine what has gone wrong in the case of a problem in script execution. Such messages will be shown in the job execution log.
    - Include necessary "clean up" steps for normal and error exit conditions.
    - Set non-zero exit codes when appropriate -- a non-zero exit code will cause an Operational Profile job to be marked failed, which is the admin's cue to look into the job details for diagnostic messages in the output from the script.

    That first bullet deserves some explanation. If Operational Profiles are usually simple "one-time" scripts and binary content is not allowed, then how do the actual content, packages, binaries, and other scripts get delivered along with the script? More specifically, how does one include such content without needing to first create some kind of traditional package? All that is required is to simply encode the content and append it to the end of the Operational Profile. The header portion of the Operational Profile will need to contain the commands to decode the embedded content that has been appended to the bottom of the script.
    The header code can do whatever else is needed, and finally clean up any intermediate files that were created during the decoding and extraction of the content. One way to encode binary and other content for inclusion in a script is to use the "uuencode" utility to convert the content into simple base64 ASCII text -- a form that is suitable to be appended to an Operational Profile. The behavior of the "uudecode" utility is such that it will skip over any parts of the input that do not fit the uuencoded "begin" and "end" clauses. For that reason, your header script will be skipped over, and uudecode will find your embedded content, which you will uuencode and paste at the end of the Operational Profile. You can have as many "begin" / "end" clauses as you need -- just separate each embedded file's block from the next with an empty line.

    Example: Install SUNWsneep and set the system serial number. Script: deploySUNWsneep.sh (right-click / save to download). Highlights:

        #!/bin/sh
        # Required variables:
        OC_SERIAL="$OC_SERIAL" # The user-supplied serial number for the asset
        ...

    The above is good practice, showing right up front what kind of input the Operational Profile will require. The right-hand side where $OC_SERIAL appears in this example will be filled in by Ops Center based on the user input at deployment time. The script goes on to restrict the use of the program to the intended OS type (Solaris 10 or older, in this example, but other content might be suitable for Solaris 11 or Linux -- it depends on the content and the script that will handle it). A temporary working directory is created, and then we have the command that decodes the embedded content from "self", which in scripting terms is $0 (a variable that expands to the name of the currently executing script):

        # Pass myself through uudecode, which will extract content to the current dir
        uudecode $0

    At that point, whatever content was appended in uuencoded form at the end of the script has been written out to the current directory. In this example that yields a file, SUNWsneep.7.0.zip, which the rest of the script proceeds to unzip and pkgadd, followed by running "/opt/SUNWsneep/bin/sneep -s $OC_SERIAL", which is the command that stores the system serial for future use by other programs such as Explorer. Don't get hung up on the example having used a pkgadd command. The content started as a zip file, but it could have been a tar.gz or any other file. This approach simply decodes the file; the header portion of the script has to make sense of the file and do the right thing (i.e., it's up to you). The script goes on to clean up after itself, whether or not the above was successful. Errors are echoed by the script and a non-zero exit code is set where appropriate. Second to last, we have:

        # just in case, exit explicitly, so that uuencoded content will not cause error
        OPCleanUP
        exit

        # The rest of the script is ignored, except by uudecode
        #
        # UUencoded content follows
        #
        # e.g. for each file needed,
        #   $ uuencode -m {source} {source} > {target}.uu5
        # then paste the {target}.uu5 files below
        # they will be extracted into the working dir at $TDIR
        #

    The commentary above also describes how to encode the content. Finally we have the uuencoded content:

        begin-base64 444 SUNWsneep.7.0.zip
        UEsDBBQAAAAIAPsRy0Di3vnukAAAAMcAAAAKABUAcmVhZG1lLnR4dFVUCQADOqnVT7up
        ...
        VXgAAFBLBQYAAAAAAgACAJEAAADTNwEAAAA=
        ====

    That last line of "====" is the base64 uuencode equivalent of a blank line followed by "end", and as mentioned you can have as many begin/end clauses as you need. Just separate each embedded file by a blank line after each ==== and before each begin-base64. Deploying the example Operational Profile looks like this (where I have pasted the system serial number into the required field). The job succeeded, but here is an example of the kind of diagnostic messages the example script produces, and how Ops Center displays them in the job details. This same general approach could be used to deploy Explorer and other useful utilities and scripts. Please let us know what you think! Until next time...

    Leon Shaner | Senior IT/Product Architect, Systems Management | Ops Center Engineering @ Oracle. The views expressed on this blog are my own and do not necessarily reflect the views of Oracle. For more information, please go to the Oracle Enterprise Manager web page or follow us at: Twitter | Facebook | YouTube | LinkedIn | Newsletter

    Read the article

  • No network packets sent immediately after quick physical disconnect and reconnect.

    - by Hans
    I am using Boost's Asio library to establish a UDP connection to a remote server. To make sure the connection is active, a keep-alive message is sent to the server every second. I have noticed that if I unplug the network cable and reinsert it quickly, the first 2 or 3 keep-alive messages after the reconnect are never sent; I verified this by running Wireshark on the server. I have seen it take up to 5 seconds before the client starts sending out network traffic again. The client is running under Linux (2.6.2), if that helps.

    Read the article

  • My code works in Debug mode, but not in Release mode.

    - by Nima
    Hi, I have C++ code in Visual Studio 2008 that works with files using just fopen and fclose. Everything works perfectly in Debug mode, and I have tested it with several datasets, but it doesn't work in Release mode -- it crashes every time. I have turned off all the optimizations, there are no extra dependencies in the linker, and I have set these options: Optimization: Disabled (/Od); Keep Unreferenced Data; Do Not Remove Redundant COMDATs; Optimize for Windows 98: No. I still wonder how it can fail under these circumstances. What else should I turn off to make it work as it does in Debug mode? I think if it worked in Release mode but not in Debug mode it might be a coding fault, but the other way around seems weird, doesn't it? I appreciate any help. --Nima

    Read the article

  • MapRedux - PowerShell and Big Data

    - by Dittenhafer Solutions
    MapRedux – #PowerShell and #Big Data

    Have you been hearing about "big data", "map reduce" and other large-scale computing terms over the past couple of years and been curious to dig into more detail? Have you read some of the Apache Hadoop online documentation and unfortunately concluded that it wasn't feasible to set up a "test" Hadoop environment on your machine? More recently, I have read about some of Microsoft's work to enable Hadoop on the Azure cloud. Being a "Microsoft"-leaning technologist, I am more inclined to be successful with experimentation when on the Windows platform. Of course, it is not that I am "religious" about one set of technologies over another, but rather more experienced. Anyway, within the past couple of weeks I have been thinking about PowerShell a bit more as the 2012 PowerShell Scripting Games approach, and it occurred to me that PowerShell's support for Windows Remote Management (WinRM), and some other inherent features of PowerShell, might lend themselves particularly well to a simple implementation of the MapReduce framework. I fired up my PowerShell ISE and started writing just to see where it would take me. Quite simply, the ScriptBlock feature combined with the ability of Invoke-Command to create remote jobs on networked servers provides much of the plumbing of a distributed computing environment. There are some limiting factors, of course. Microsoft provided some default settings which prevent PowerShell from taking over a network without administrative approval first. But even with just one adjustment, a given Windows-based machine can become a node in a MapReduce-style distributed computing environment.

    Ok, so enough introduction. Let's talk about the code. First, any machine that will participate as a remote "node" will need WinRM enabled for remote access, as shown below. This is not exactly practical for hundreds of intended nodes, but for one (or five) machines in a test environment it does just fine.

        C:\> winrm quickconfig
        WinRM is not set up to receive requests on this machine.
        The following changes must be made:
        Set the WinRM service type to auto start.
        Start the WinRM service.
        Make these changes [y/n]? y

    Alternatively, you could take the approach described in the "Remotely enable PSRemoting" post from the TechNet forum and use PowerShell to create remote scheduled tasks that will call Enable-PSRemoting on each intended node.

    Invoke-MapRedux. Moving on, now that you have one or more remote "nodes" enabled, you can consider the actual Map and Reduce algorithms. Consider the following snippet:

        $MyMrResults = Invoke-MapRedux -MapReduceItem $Mr -ComputerName $MyNodes -DataSet $dataset -Verbose

    Invoke-MapRedux takes an instance of a MapReduceItem which references the Map and Reduce scriptblocks, an array of computer names which are the remote nodes, and the initial data set to be processed. As simple as that, you can start working with the concepts of big data and the MapReduce paradigm. Now, how did we get there? I have published the initial version of my PsMapRedux PowerShell module on GitHub. The PsMapRedux module provides the Invoke-MapRedux function described above. Feel free to browse the underlying code and even contribute to the project! In a later post, I plan to show some of the inner workings of the module, but for now let's move on to how the Map and Reduce functions are defined.

    Map. Both the Map and Reduce functions need to follow a prescribed prototype. The prototype for a Map function in the MapRedux module is as follows.
    A simple scriptblock that takes one PsObject parameter and returns a hashtable. It is important to note that the PsObject $dataset parameter is a MapRedux custom object that has a "Data" property which offers an array of data to be processed by the Map function.

        $aMap =
        {
            Param ( [PsObject] $dataset )
            # Indicate the job is running on the remote node.
            Write-Host ($env:computername + "::Map");
            # The hashtable to return
            $list = @{};
            # ... Perform the mapping work and prepare the $list hashtable result with your custom PSObject...
            # ... The $dataset has a single 'Data' property which contains an array of data rows
            #     which is a subset of the originally submitted data set.
            # Return the hashtable (Key, PSObject)
            Write-Output $list;
        }

    Reduce. Likewise, with the Reduce function a simple prototype must be followed, which takes a $key and a result $dataset from MapRedux's partitioning function (which joins the Map results by key). Again, the $dataset is a MapRedux custom object that has a "Data" property as described in the Map section.

        $aReduce =
        {
            Param ( [object] $key, [PSObject] $dataset )
            Write-Host ($env:computername + "::Reduce - Count: " + $dataset.Data.Count)
            # The hashtable to return
            $redux = @{};
            # Return
            Write-Output $redux;
        }

    All together now. When everything is put together in a short example script, you implement your Map and Reduce functions, query for some starting data, build the MapReduxItem via New-MapReduxItem and call Invoke-MapRedux to get the process started:

        # Import the MapRedux and SQL Server providers
        Import-Module "MapRedux"
        Import-Module "sqlps" -DisableNameChecking

        # Query the database for a dataset
        Set-Location SQLSERVER:\sql\dbserver1\default\databases\myDb
        $query = "SELECT MyKey, Date, Value1 FROM BigData ORDER BY MyKey";
        Write-Host "Query: $query"
        $dataset = Invoke-SqlCmd -query $query

        # Build the Map function
        $MyMap =
        {
            Param ( [PsObject] $dataset )
            Write-Host ($env:computername + "::Map");
            $list = @{};
            foreach($row in $dataset.Data)
            {
                # Write-Host ("Key: " + $row.MyKey.ToString());
                if($list.ContainsKey($row.MyKey) -eq $true)
                {
                    $s = $list.Item($row.MyKey);
                    $s.Sum += $row.Value1;
                    $s.Count++;
                }
                else
                {
                    $s = New-Object PSObject;
                    $s | Add-Member -Type NoteProperty -Name MyKey -Value $row.MyKey;
                    $s | Add-Member -Type NoteProperty -Name Sum -Value $row.Value1;
                    $list.Add($row.MyKey, $s);
                }
            }
            Write-Output $list;
        }

        $MyReduce =
        {
            Param ( [object] $key, [PSObject] $dataset )
            Write-Host ($env:computername + "::Reduce - Count: " + $dataset.Data.Count)
            $redux = @{};
            $count = 0;
            foreach($s in $dataset.Data)
            {
                $sum += $s.Sum;
                $count += 1;
            }
            # Reduce
            $redux.Add($s.MyKey, $sum / $count);
            # Return
            Write-Output $redux;
        }

        # Create the item data
        $Mr = New-MapReduxItem "My Test MapReduce Job" $MyMap $MyReduce

        # Array of processing nodes...
        $MyNodes = ("node1", "node2", "node3", "node4", "localhost")

        # Run the Map Reduce routine...
        $MyMrResults = Invoke-MapRedux -MapReduceItem $Mr -ComputerName $MyNodes -DataSet $dataset -Verbose

        # Show the results
        Set-Location C:\
        $MyMrResults | Out-GridView

    Conclusion. I hope you have seen through this article that PowerShell has a significant infrastructure available for distributed computing. While it does take some code to expose a MapReduce-style framework, much of the work is already done, and PowerShell could prove to be the easiest platform to develop and run big data jobs in your corporate data center, potentially in the Azure cloud, or certainly as an academic exercise at home or school.
    Follow me on Twitter to stay up to date on the continuing progress of my PowerShell MapRedux module, and thanks for reading! Daniel

    Read the article

  • Odd results when searching for numbers using IXSSO.Query

    - by Vic
    Hi, from classic ASP on Windows 2008, using IXSSO.Query, when I search for a string of numbers, for example 10000000001, I receive results that also include variations of it, like 10000000002, 10000000003 and so on. If I change the first digit so the search string is 20000000001, I don't get anything. If I keep moving the last digit from my first example to the left, I keep getting hits until I reach the halfway point, when I get no results! In other words, 10000020000 returns results like in the first example, but 10000200000 does not. It sounds to me like it's matching on only the first six characters and ignoring the rest... Here are the relevant parts of the code:
    Set oQuery = Server.CreateObject("IXSSO.Query")
    oQuery.Catalog = SEARCH_CATALOG
    oQuery.Query = "@all " & searchstr
    Set oRS = oQuery.CreateRecordset("nonsequential")
    Anyone got any ideas and/or suggestions? Thanks, Vic.

    Read the article

  • Design Patterns: What's the antithesis of Front Controller?

    - by Brian Lacy
    I'm familiar with the Front Controller pattern, in which all events/requests are processed through a single centralized controller. But what would you call it when you wish to keep the various parts of an application separate at the presentation layer as well? My first thought was "Facade", but it turns out that's something entirely different. In my particular case, I'm converting an application from a sprawling procedural mess to a clean MVC architecture, but it's a long-term process -- we need to keep things separated as much as possible to facilitate a slow integration with the rest of the system. Our application is web-based, built in PHP, so for instance we have an "index.php" and an IndexController, an "account.php" and an AccountController, a "dashboard.php" and a DashboardController, and so on.
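    For what it's worth, the per-page arrangement described above, one entry script paired with its own controller, is usually referred to as Fowler's Page Controller pattern (the usual counterpart to Front Controller). A minimal, hypothetical PHP sketch of one such page controller; the class and file names echo the examples in the question:

    ```php
    <?php
    // account.php -- a Page Controller: this script alone handles requests for the account page.
    require_once __DIR__ . '/controllers/AccountController.php'; // hypothetical path

    $controller = new AccountController();

    // Each page script does its own lightweight routing.
    $action = isset($_GET['action']) ? $_GET['action'] : 'show';
    switch ($action) {
        case 'edit':
            $controller->edit($_POST);
            break;
        default:
            $controller->show();
    }
    ```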

    Read the article

  • Of DataGridViews, data binding, and non-validating cell values.

    - by Yanko Hernández Alvarez
    Let's simplify. Let's say I have this class:
    class T { public string Name { get; set; } public int Age { get; set; } public int height { get; set; } ... }
    and I have a DataGridView's DataSource bound to a BindingList<T>, with N columns, each one bound to a property. I need to:
    - allow the user to enter non-validating ages, heights, etc. (for instance "aaa");
    - color the cells with non-validating values (red background);
    - retain the non-validating values on display until the form is closed (I don't want to lose the values entered, so the user has the option to correct the bad cells anytime he wants before closing the form);
    - keep the last correct value entered for each cell that holds a non-validating value;
    - when the form is closed, ditch the non-validating values and keep the last correct values entered.
    Is there any easy way to do this?
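    There is no single switch for this; one common workaround (a sketch under assumptions, not a definitive answer) is to let the grid's DataError event catch values that cannot be committed to the int properties, remember the raw text per cell, and use CellFormatting to keep showing it with a red background while the bound object retains its last good value. The dictionary and event wiring below are illustrative only:

    ```csharp
    // Assumed to live in the form that owns the DataGridView 'grid'.
    private readonly Dictionary<(int Row, int Col), string> badValues =
        new Dictionary<(int, int), string>();

    private void grid_DataError(object sender, DataGridViewDataErrorEventArgs e)
    {
        // The typed property rejected the input (e.g. "aaa" into Age):
        // remember what the user typed and swallow the exception.
        var raw = grid[e.ColumnIndex, e.RowIndex].EditedFormattedValue?.ToString();
        badValues[(e.RowIndex, e.ColumnIndex)] = raw;
        e.ThrowException = false;
        e.Cancel = false; // let the grid revert to the last good bound value
    }

    private void grid_CellFormatting(object sender, DataGridViewCellFormattingEventArgs e)
    {
        if (badValues.TryGetValue((e.RowIndex, e.ColumnIndex), out var raw))
        {
            e.Value = raw;                        // keep showing the bad text
            e.CellStyle.BackColor = Color.Red;    // flag it
            e.FormattingApplied = true;
        }
    }

    private void grid_CellValueChanged(object sender, DataGridViewCellEventArgs e)
    {
        // A value committed successfully: the cell is good again.
        badValues.Remove((e.RowIndex, e.ColumnIndex));
    }
    ```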

    Read the article

  • Store data for songs MySQL DB

    - by Johan
    I'm storing a huge set of songs in a MySQL database. This is what I store in the 'songs' table:
    CREATE TABLE `songs` (
      `song_id` int(10) unsigned NOT NULL auto_increment,
      `song_artist` varchar(255) NOT NULL,
      `song_track` varchar(255) NOT NULL,
      `song_mix` varchar(255) NOT NULL,
      `song_title` text NOT NULL,
      `song_hash` varchar(40) NOT NULL,
      `song_addtime` int(10) unsigned NOT NULL,
      `song_source` text NOT NULL,
      `song_file` varchar(255) NOT NULL,
      PRIMARY KEY (`song_id`)
    ) ENGINE=MyISAM AUTO_INCREMENT=1857 DEFAULT CHARSET=latin1
    Now I'd like to keep track of how many plays each song has, and other song-specific data. I don't want to keep adding fields to the 'songs' table for this. How can I store song-related data in a more efficient way? What's the best practice here?
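    One conventional approach (a sketch, not the only answer) is to keep volatile or optional per-song data in separate tables keyed by song_id, so the 'songs' table stays stable: a counter table for plays and, if needed, a generic key/value table for other attributes. The table and column names here are made up:

    ```sql
    -- Play counter, one row per song, updated on every play.
    CREATE TABLE `song_stats` (
      `song_id`     int(10) unsigned NOT NULL,
      `play_count`  int(10) unsigned NOT NULL DEFAULT 0,
      `last_played` int(10) unsigned NULL,
      PRIMARY KEY (`song_id`)
    ) ENGINE=MyISAM DEFAULT CHARSET=latin1;

    -- Record a play without touching the `songs` table:
    INSERT INTO song_stats (song_id, play_count, last_played)
    VALUES (42, 1, UNIX_TIMESTAMP())
    ON DUPLICATE KEY UPDATE play_count = play_count + 1, last_played = UNIX_TIMESTAMP();

    -- Optional free-form attributes (EAV style) for rarely used per-song data.
    CREATE TABLE `song_meta` (
      `song_id` int(10) unsigned NOT NULL,
      `name`    varchar(64) NOT NULL,
      `value`   text NOT NULL,
      PRIMARY KEY (`song_id`, `name`)
    ) ENGINE=MyISAM DEFAULT CHARSET=latin1;
    ```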

    Read the article

  • Should I trust Redis for data integrity?

    - by Jiaji
    In my current project I have PostgreSQL as my master DB and Redis as a kind of slave; e.g., when a user adds another as a friend, the relationship is first stored in PostgreSQL and then a friend list in Redis is updated. When a user's friend list is requested, it is pulled out of Redis instead of PostgreSQL. The question is: when I update the friend list in Redis, should I get a fresh copy out of PostgreSQL and replace the old list in Redis with the new one, or should I keep the old list and simply SADD the user id into it? The latter is of course better for performance, but intuitively the former does a better job of keeping data integrity. And if something like Celery is used, is the second method worth the risk?
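    If the rebuild route is chosen, it helps to make the replace atomic so readers never see a half-built list. A small sketch with redis-py (the client library and key naming are assumptions, not from the question): build the new set under a temporary key and RENAME it over the old one, with the commands queued in a MULTI/EXEC pipeline.

    ```python
    import redis

    r = redis.Redis()

    def rebuild_friend_list(user_id, friend_ids_from_postgres):
        """Replace friends:<user_id> atomically with the authoritative list from PostgreSQL."""
        key, tmp = f"friends:{user_id}", f"friends:{user_id}:tmp"
        pipe = r.pipeline()              # MULTI/EXEC: queued commands run atomically
        pipe.delete(tmp)
        if friend_ids_from_postgres:
            pipe.sadd(tmp, *friend_ids_from_postgres)
            pipe.rename(tmp, key)        # atomic swap; readers see old or new, never partial
        else:
            pipe.delete(key)
        pipe.execute()

    def add_friend(user_id, friend_id):
        """The cheap incremental path: just SADD after PostgreSQL commits."""
        r.sadd(f"friends:{user_id}", friend_id)
    ```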

    Read the article

  • Routinely sync a branch to master using git rebase

    - by m1755
    I have a Git repository with a branch that hardly ever changes (nobody else is contributing to it). It is basically the master branch with some code and files stripped out. Having this branch around makes it easy for me to package up a leaner version of my project without having to strip out the code and files manually every time. I have been using git rebase to keep this branch up to date with master, but I always get this warning when I try to push the branch after rebasing:
    To prevent you from losing history, non-fast-forward updates were rejected
    Merge the remote changes before pushing again. See the 'Note about fast-forwards'
    section of 'git push --help' for details.
    I then use git push --force and it works, but I feel like this is probably bad practice. I want to keep this branch "in sync" with master quickly and easily. Is there a better way of handling this task?
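    Since nothing downstream depends on this branch's history, the force-push is mostly harmless, but a merge-based workflow avoids rewriting history entirely and pushes cleanly every time. A sketch (the branch name 'lean' is made up for the example):

    ```
    # one-time: create the stripped-down branch
    git checkout -b lean master
    # ...delete the unwanted files, commit...

    # routine sync: bring in everything new from master, no history rewrite
    git checkout lean
    git merge master          # produces an ordinary merge commit
    git push origin lean      # fast-forward on the remote, no --force needed
    ```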

    Read the article

  • How to do the following in ListView

    - by Johnny
    How do I do the following things with a ListView?
    1. Only show the scrollbar when the user flips the list. By default, if the list is taller than the screen, there is always a scrollbar on the right side. Is there a way to make the scrollbar show only while the user is flipping the list?
    2. Keep showing the list background image while scrolling. I've set an image as the background of the ListView, but when I scroll the list, the background image disappears and only a black list background shows. Is there any way to keep showing the background image while scrolling?
    3. Don't show the shadow indicator. When the list has more items to display, there is a blurred black shadow to indicate to the user that there are more items. Is there a way to remove this indicator?
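    Assuming this is an Android ListView (the question doesn't say explicitly), the three behaviours map to standard attributes: fading scrollbars, the scrolling cache color hint (the usual cause of the black background while flinging), and the fading edge. A hedged layout sketch; attribute support varies by API level:

    ```xml
    <!-- Illustrative Android layout snippet -->
    <ListView
        android:id="@+id/list"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:background="@drawable/list_bg"
        android:scrollbars="vertical"
        android:fadeScrollbars="true"
        android:cacheColorHint="#00000000"
        android:fadingEdge="none" />
    ```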

    Read the article

  • Create Auto Customization Criteria OAF Search Page

    - by PRajkumar
    1. Create a new workspace and project. Right-click Workspaces, click "Create new OA Workspace" and name it PRajkumarCustSearch. A new OA Project is created automatically; name the project CustSearchDemo and the package prajkumar.oracle.apps.fnd.custsearchdemo.

    2. Create a new Application Module (AM). Right-click CustSearchDemo > New > ADF Business Components > Application Module. Name -- CustSearchAM; Package -- prajkumar.oracle.apps.fnd.custsearchdemo.server.

    3. Enable passivation for the root UI Application Module (AM). Right-click CustSearchAM > Edit CustSearchAM > Custom Properties > Name – RETENTION_LEVEL, Value – MANAGE_STATE. Click Add > Apply > OK.

    4. Create a test table and insert some data into it (for testing purposes):

        CREATE TABLE xx_custsearch_demo (
          -- Data Columns
          column1             VARCHAR2(100),
          column2             VARCHAR2(100),
          column3             VARCHAR2(100),
          column4             VARCHAR2(100),
          -- Who Columns
          last_update_date    DATE     NOT NULL,
          last_updated_by     NUMBER   NOT NULL,
          creation_date       DATE     NOT NULL,
          created_by          NUMBER   NOT NULL,
          last_update_login   NUMBER
        );

        INSERT INTO xx_custsearch_demo VALUES('v1','v2','v3','v4',SYSDATE,0,SYSDATE,0,0);
        INSERT INTO xx_custsearch_demo VALUES('v1','v3','v4','v5',SYSDATE,0,SYSDATE,0,0);
        INSERT INTO xx_custsearch_demo VALUES('v2','v3','v4','v5',SYSDATE,0,SYSDATE,0,0);
        INSERT INTO xx_custsearch_demo VALUES('v3','v4','v5','v6',SYSDATE,0,SYSDATE,0,0);

    Now we have 4 records in our custom table.

    5. Create a new Entity Object (EO). Right-click CustSearchDemo > New > ADF Business Components > Entity Object. Name – CustSearchEO; Package -- prajkumar.oracle.apps.fnd.custsearchdemo.schema.server; Database Objects -- XX_CUSTSEARCH_DEMO. Note – by default ROWID will be the primary key if we do not make any column the primary key. Check the Accessors, Create Method, Validation Method and Remove Method.

    6. Create a new View Object (VO). Right-click CustSearchDemo > New > ADF Business Components > View Object. Name -- CustSearchVO; Package -- prajkumar.oracle.apps.fnd.custsearchdemo.server. In Step 2 on the Entity page, select CustSearchEO and shuttle it to the selected list. In Step 3 in the Attributes window, select the columns Column1, Column2, Column3, Column4 and shuttle them to the selected list. On the Java page, deselect "Generate Java file for View Object Class: CustSearchVOImpl" and select "Generate Java File for View Row Class: CustSearchVORowImpl".

    7. Add your View Object to the root UI Application Module. Right-click CustSearchAM > Application Modules > Data Model. Select CustSearchVO and shuttle it to the Data Model list.

    8. Create a new page. Right-click CustSearchDemo > New > Web Tier > OA Components > Page. Name -- CustSearchPG; Package -- prajkumar.oracle.apps.fnd.custsearchdemo.webui.

    9. Select CustSearchPG and go to the structure pane, where a default region has been created.

    10. Select region1 and set the following properties: ID -- PageLayoutRN; Region Style -- PageLayout; AM Definition -- prajkumar.oracle.apps.fnd.custsearchdemo.server.CustSearchAM; Window Title – AutoCustomize Search Page; Title – AutoCustomization Search Page; Auto Footer -- True.
    11. Add a query bean to your page. Right-click PageLayoutRN > New > Region. Select the new region region1 and set the following properties: ID – QueryRN; Region Style – query; Construction Mode – autoCustomizationCriteria; Include Simple Panel – False; Include Views Panel – False; Include Advanced Panel – False.

    12. Create a new region of style table. Right-click QueryRN > New > Region Using Wizard. Application Module – prajkumar.oracle.apps.fnd.custsearchdemo.server.CustSearchAM; Available View Usages – CustSearchVO1. In Step 2 (Region Properties) set: Region ID – CustSearchTable; Region Style – Table. In Step 3 (View Attributes) shuttle all the items (Column1, Column2, Column3, Column4) from "Available View Attributes" to "Selected View Attributes". In Step 4 (Region Items) set the style to "messageStyledText" for all items.

    13. Select CustSearchTable in the structure panel and set the Width property to 100%.

    14. Include the simple search panel. Right-click QueryRN > New > simpleSearchPanel. Region2 (a header region) and region1 (a messageComponentLayout region) are created automatically. Set the following properties for region2: Id – SimpleSearchHeader; Text -- Simple Search.

    15. Now right-click the messageComponentLayout region and create two message text input beans, setting the properties below on each.
    Message TextInputBean1: Id – SearchColumn1; Search Allowed – True; Data Type – VARCHAR2; Maximum Length – (not specified); CSS Class – OraFieldText; Prompt – Column1.
    Message TextInputBean2: Id – SearchColumn2; Search Allowed -- True; Data Type – VARCHAR2; Maximum Length – 100; CSS Class – OraFieldText; Prompt – Column2.

    16. Now right-click Query Components and create Simple Search Mappings. SimpleSearchMappings and QueryCriteriaMap1 are then created automatically.

    17. Now select QueryCriteriaMap1 and set the properties below: Id – SearchColumn1Map; Search Item – SearchColumn1; Result Item – Column1.

    18. Now right-click simpleSearchMappings again > New > queryCriteriaMap, and set the properties below: Id – SearchColumn2Map; Search Item – SearchColumn2; Result Item – Column2.

    19. Congratulations, you have successfully finished the auto-customization criteria search page. Run your CustSearchPG page and test your work.

    Read the article

  • How to force Grails GORM to respect the DB schema?

    - by fabien-barbier
    I have two domain classes:
    class CodeSet {
        String id
        String owner
        String comments
        String geneRLF
        String systemAPF
        static hasMany = [cartridges: Cartridge]
        static constraints = {
            id(unique: true, blank: false)
        }
        static mapping = {
            table 'code_set'
            version false
            columns {
                id column: 'code_set_id', generator: 'assigned'
                owner column: 'owner'
                comments column: 'comments'
                geneRLF column: 'gene_rlf'
                systemAPF column: 'system_apf'
            }
        }
    }
    and:
    class Cartridge {
        String id
        String code_set_id
        Date runDate
        static belongsTo = CodeSet
        static constraints = {
            id(unique: true, blank: false)
        }
        static mapping = {
            table 'cartridge'
            version false
            columns {
                id column: 'cartridge_id', generator: 'assigned'
                code_set_id column: 'code_set_id'
                runDate column: 'run_date'
            }
        }
    }
    With those models I actually get the tables code_set, cartridge, and code_set_cartridge (two fields: code_set_cartridges_id, cartridge_id). I would like not to have the code_set_cartridge table but keep the relationship code_set -- 1:n -- cartridge. In other words, how can I keep the association between code_set and cartridge without an intermediate table (using code_set_id as the primary key in code_set and as a foreign key in cartridge)? Can the mapping be done in GORM without an intermediate table?
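    In GORM, a unidirectional hasMany is what produces the join table; making the association bidirectional, by giving Cartridge a real back-reference via the map form of belongsTo, normally makes Hibernate use a foreign-key column in the child table instead. A hedged sketch of just the association parts (column names follow the question; details may need adjusting for your Grails version):

    ```groovy
    class CodeSet {
        String id
        static hasMany = [cartridges: Cartridge]
        static mapping = {
            table 'code_set'
            id column: 'code_set_id', generator: 'assigned'
        }
    }

    class Cartridge {
        String id
        Date runDate
        // The map form creates a 'codeSet' property; the 1:n is now bidirectional,
        // so GORM stores a foreign key in 'cartridge' instead of using a join table.
        static belongsTo = [codeSet: CodeSet]
        static mapping = {
            table 'cartridge'
            id column: 'cartridge_id', generator: 'assigned'
            codeSet column: 'code_set_id'   // the FK column in the cartridge table
        }
    }
    ```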

    Read the article

  • What's the (hidden) cost of lazy val? (Scala)

    - by Jesper
    One handy feature of Scala is lazy val, where the evaluation of a val is delayed until it's necessary (at first access). Of course a lazy val must have some overhead -- somewhere Scala must keep track of whether the value has already been evaluated, and the evaluation must be synchronized, because multiple threads might try to access the value for the first time at the same time. What exactly is the cost of a lazy val -- is there a hidden boolean flag associated with it to keep track of whether it has been evaluated, what exactly is synchronized, and are there any more costs? And a follow-up question: suppose I do this: class Something { lazy val (x, y) = { ... } } Is this the same as having two separate lazy vals x and y, or do I get the overhead only once, for the pair (x, y)?
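    As a rough illustration of where the cost comes from, a lazy val in Scala 2 compiles to something close to the hand-written double-checked initializer below: a bitmap flag plus a synchronized block that runs only on first access. This is a simplified sketch of the generated shape, not the exact bytecode.

    ```scala
    class Something {
      // What `lazy val value = compute()` roughly expands to:
      @volatile private[this] var bitmap0: Boolean = false
      private[this] var value0: Int = _

      private def value$lzycompute(): Int = {
        this.synchronized {
          if (!bitmap0) {        // second check, under the lock
            value0 = compute()   // evaluated exactly once
            bitmap0 = true
          }
        }
        value0
      }

      // Every access pays a flag check; the lock is taken only before initialization.
      def value: Int = if (bitmap0) value0 else value$lzycompute()

      private def compute(): Int = { println("computing"); 42 }
    }
    ```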

    Read the article

  • Best Practice For Referencing an External Module In a Java Project

    - by Greg Harman
    I have a Java project that expects external modules to be registered with it. These modules: implement a particular interface in the main project; are packaged into a single jar (along with any dependencies); and contain some human-readable meta-information (like the module name). My main project needs to be able to load any of these external modules at runtime (e.g. using its own classloader). My question is: what's the best way of registering these modules with the main project? (I'd prefer to keep this vanilla Java and not use any third-party frameworks/libraries for this isolated issue.) My current solution is to keep a single .properties file in the main project with key = name and value = class plus human-readable name (or to coordinate two .properties files in order to avoid the delimiter parsing). At runtime, the main project loads the .properties file and uses any entries it finds to drive the classloader. This feels hokey to me. Is there a better way to do this?
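    One vanilla-Java option worth considering (not mentioned in the question) is java.util.ServiceLoader: each module jar declares its implementations in META-INF/services, so no central .properties file needs editing. A sketch; the Module interface, package name and getName() method are hypothetical stand-ins for the real ones:

    ```java
    package com.example;

    import java.net.URL;
    import java.net.URLClassLoader;
    import java.util.ServiceLoader;

    // In the main project: the contract plus the human-readable metadata.
    public interface Module {
        String getName();   // human-readable module name
        void start();
    }

    // Each module jar ships a text file:
    //   META-INF/services/com.example.Module
    // whose contents are the implementing class name(s), e.g. com.example.reporting.ReportingModule.

    class ModuleRegistry {
        static void loadModules(URL[] moduleJars) {
            // Plain URLClassLoader here; swap in a child-first loader if isolation matters.
            ClassLoader loader = new URLClassLoader(moduleJars, Module.class.getClassLoader());
            for (Module m : ServiceLoader.load(Module.class, loader)) {
                System.out.println("Registering module: " + m.getName());
                m.start();
            }
        }
    }
    ```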

    Read the article

  • Include Files using Wildcard into a folder in Visual Studio

    - by quip
    I am using
    <ItemGroup>
      <EmbeddedResource Include="..\..\resources\hbm\*.hbm.xml" />
    </ItemGroup>
    to include a bunch of XML files in my C# project. It works fine, but I don't want them at the "root level" of the project; I would rather see them in a subfolder. For example, this file is included in a Mapping folder in Visual Studio:
    <ItemGroup>
      <EmbeddedResource Include="Mapping\User.hbm.xml" />
    </ItemGroup>
    That's what I want for my *.hbm.xml files. I can't figure out how to do it while still keeping my wildcard *.hbm.xml part and also keeping the actual files in a different directory. I've looked at MSDN's documentation on MSBuild and items, but no luck.
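    The usual trick for this is the Link metadata on the item: the files stay where they are on disk, but Visual Studio shows them under the folder named in Link. Combined with the well-known item metadata %(Filename)%(Extension), the wildcard keeps working. A sketch (untested here, but this is standard MSBuild item metadata):

    ```xml
    <ItemGroup>
      <EmbeddedResource Include="..\..\resources\hbm\*.hbm.xml">
        <!-- Show each matched file under the Mapping folder in Solution Explorer -->
        <Link>Mapping\%(Filename)%(Extension)</Link>
      </EmbeddedResource>
    </ItemGroup>
    ```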

    Read the article

  • Is it OK to open a DB4o file for query, insert, update multiple times?

    - by Khnle
    This is the way I am thinking of using db4o. When I need to query, I open the file, read, and close:
    using (IObjectContainer db = Db4oFactory.OpenFile(Db4oFactory.NewConfiguration(), YapFileName))
    {
        try
        {
            List<Pilot> pilots = db.Query<Pilot>().ToList<Pilot>();
        }
        finally
        {
            try { db.Close(); } catch (Exception) { }
        }
    }
    At some later time, when I need to insert:
    using (IObjectContainer db = Db4oFactory.OpenFile(Db4oFactory.NewConfiguration(), YapFileName))
    {
        try
        {
            Pilot pilot1 = new Pilot("Michael Schumacher", 100);
            db.Store(pilot1);
        }
        finally
        {
            try { db.Close(); } catch (Exception) { }
        }
    }
    I thought this would keep the file tidier, by only having it open when needed and closed most of the time. But I keep getting an InvalidCastException: "Unable to cast object of type 'Db4objects.Db4o.Reflect.Generic.GenericObject' to type 'Pilot'". What's the correct way to use db4o?

    Read the article

  • Taming the malloc/free beast -- tips & tricks

    - by roufamatic
    I've been using C on some projects for a master's degree but have never built production software with it. (.NET & Javascript are my bread and butter.) Obviously, the need to free() memory that you malloc() is critical in C. This is fine, well and good if you can do both in one routine. But as programs grow, and structs deepen, keeping track of what's been malloc'd where and what's appropriate to free gets harder and harder. I've looked around on the interwebs and only found a few generic recommendations for this. What I suspect is that some of you long-time C coders have come up with your own patterns and practices to simplify this process and keep the evil in front of you. So: how do you recommend structuring your C programs to keep dynamic allocations from becoming memory leaks?
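    One pattern that keeps ownership obvious (a common convention, offered here as a sketch rather than the canonical answer) is to pair every xxx_create() with an xxx_destroy() that frees everything the struct owns, so call sites never free fields individually:

    ```c
    #include <stdlib.h>
    #include <string.h>

    typedef struct {
        char *name;
        double *samples;
        size_t count;
    } Record;

    /* The only place that allocates a Record and everything inside it. */
    Record *record_create(const char *name, size_t count)
    {
        Record *r = malloc(sizeof *r);
        if (!r) return NULL;
        r->name = strdup(name);                       /* strdup: POSIX, widely available */
        r->samples = calloc(count, sizeof *r->samples);
        r->count = count;
        if (!r->name || !r->samples) {                /* partial failure: release what we got */
            free(r->name);
            free(r->samples);
            free(r);
            return NULL;
        }
        return r;
    }

    /* The only place that frees it; safe to call with NULL. */
    void record_destroy(Record *r)
    {
        if (!r) return;
        free(r->name);
        free(r->samples);
        free(r);
    }
    ```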

    Read the article

  • calculate intersection between two segments in a symmetric way

    - by Elazar Leibovich
    When using the usual formulas to calculate the intersection of two 2D segments (i.e., the ones linked here), if you round the result to an integer you get non-symmetric results. That is, sometimes, due to rounding errors, I get intersection(A,B) != intersection(B,A). The best solution would be to keep working with floats and compare the results up to a certain precision; however, I must round the results to integers after calculating the intersection, so I cannot keep working with floats. My best solution so far is to use some total order on the segments in the plane and have intersection() always compare the smaller segment against the larger one. Is there a better method? Am I missing something?
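    The order-the-arguments idea can be made very small: canonicalize before computing, so both call orders evaluate exactly the same floating-point expression. A sketch in C; the point/segment types are illustrative, and intersect_raw stands for whatever formula is already in use. Normalizing each segment's own endpoint order the same way also helps if segments can arrive with swapped endpoints.

    ```c
    #include <stdbool.h>

    typedef struct { double x, y; } Point;
    typedef struct { Point a, b; } Segment;

    /* Lexicographic order on endpoints gives a total order on segments. */
    static int point_cmp(Point p, Point q)
    {
        if (p.x != q.x) return p.x < q.x ? -1 : 1;
        if (p.y != q.y) return p.y < q.y ? -1 : 1;
        return 0;
    }

    static int segment_cmp(Segment s, Segment t)
    {
        int c = point_cmp(s.a, t.a);
        return c != 0 ? c : point_cmp(s.b, t.b);
    }

    /* The existing, possibly asymmetric, intersection routine. */
    bool intersect_raw(Segment s, Segment t, Point *out);

    /* Symmetric wrapper: both argument orders run the exact same computation. */
    bool intersect(Segment s, Segment t, Point *out)
    {
        if (segment_cmp(s, t) > 0) { Segment tmp = s; s = t; t = tmp; }
        return intersect_raw(s, t, out);
    }
    ```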

    Read the article
