Search Results

Search found 67143 results on 2686 pages for 'complex data types'.

  • Complex SQL Query similar to a z order problem

    - by AaronLS
    I have a complex SQL problem in MS SQL Server, and in drawing on a piece of paper I realized that I could think of it as a single bar filled with rectangles, each rectangle having segments with different Z orders. In reality it has nothing to do with z order or graphics at all, but more to do with some complex business rules that would be difficult to explain. However, if anyone has ideas on how to solve the below, that will give me my solution. I have the following data:

        ObjectID, PercentOfBar, ZOrder (where smaller is closer)
        A, 100, 6
        B, 50, 5
        B, 50, 4
        C, 30, 3
        C, 70, 6

    The result of my query that I want is this, in any order:

        PercentOfBar, ZOrder
        50, 5
        20, 4
        30, 3

    Think of it like this: if I drew rectangle A, it would fill 100% of the bar and have a z order of 6.

        66666666666
        AAAAAAAAAAA

    If I then laid out rectangle B, consisting of two segments, both segments would cover up rectangle A, resulting in the following rendering:

        4444455555
        BBBBBBBBBB

    As a rule of thumb, for a given rectangle, its segments should be laid out such that the highest z order is to the right of the lower z orders. Finally, rectangle C would cover up only portions of rectangle B with its 30% segment that is z order 3, which would be on the left. You can hopefully see how this is represented in the output dataset I listed above:

        3334455555
        CCCBBBBBBB

    Now to make things more complicated, I actually have a 4th column such that this grouping occurs for each key. Input:

        SomeKey, ObjectID, PercentOfBar, ZOrder (where smaller is closer)
        X, A, 100, 6
        X, B, 50, 5
        X, B, 50, 4
        X, C, 30, 3
        X, C, 70, 6
        Y, A, 100, 6
        Z, B, 50, 2
        Z, B, 50, 6
        Z, C, 100, 5

    Output:

        SomeKey, PercentOfBar, ZOrder
        X, 50, 5
        X, 20, 4
        X, 30, 3
        Y, 100, 6
        Z, 50, 2
        Z, 50, 5

    Notice in the output, the PercentOfBar for each SomeKey would add up to 100%. This is one I know I'm going to be thinking about when I go to bed tonight. Just to be explicit and have a question: what would be a query that would produce the results described above?
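
    One possible direction (a sketch I put together, not from the original thread, and assuming SQL Server 2012+ for the window functions; the table name BarSegments is made up): within each object, a running sum over ascending ZOrder gives every segment its interval on the bar; cutting the bar at all interval endpoints and keeping the smallest ZOrder covering each elementary cell yields the visible pieces.

        WITH segs AS (
            -- each segment's [StartPct, EndPct) interval within its object's bar
            SELECT SomeKey, ObjectID, ZOrder, PercentOfBar,
                   SUM(PercentOfBar) OVER (PARTITION BY SomeKey, ObjectID
                                           ORDER BY ZOrder
                                           ROWS UNBOUNDED PRECEDING) AS EndPct,
                   SUM(PercentOfBar) OVER (PARTITION BY SomeKey, ObjectID
                                           ORDER BY ZOrder
                                           ROWS UNBOUNDED PRECEDING) - PercentOfBar AS StartPct
            FROM BarSegments
        ),
        pts AS (
            -- all cut points per key
            SELECT SomeKey, StartPct AS Pct FROM segs
            UNION
            SELECT SomeKey, EndPct FROM segs
        ),
        cells AS (
            -- elementary intervals between consecutive cut points
            SELECT SomeKey, Pct AS CellStart,
                   LEAD(Pct) OVER (PARTITION BY SomeKey ORDER BY Pct) AS CellEnd
            FROM pts
        )
        SELECT c.SomeKey, SUM(c.CellEnd - c.CellStart) AS PercentOfBar, w.ZOrder
        FROM cells c
        CROSS APPLY (SELECT TOP (1) s.ZOrder            -- smallest z covering the cell wins
                     FROM segs s
                     WHERE s.SomeKey = c.SomeKey
                       AND s.StartPct <= c.CellStart
                       AND s.EndPct >= c.CellEnd
                     ORDER BY s.ZOrder) AS w
        WHERE c.CellEnd IS NOT NULL
        GROUP BY c.SomeKey, w.ZOrder;

    Traced by hand against the sample input this reproduces the listed output (e.g. X: 30@3, 20@4, 50@5), but treat it as a starting point rather than a tested solution.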

  • How can I get PowerShell Added-Types to use Added References

    - by Scott Weinstein
    I'm working on a PoSh project that generates C# code, and then Add-Types it into memory. The new types use existing types in an on-disk DLL, which is loaded via Add-Type. All is well and good until I actually try to invoke methods on the new types. Here's an example of what I'm doing:

        $PWD = "."
        rm -Force $PWD\TestClassOne*
        $code = "
        namespace TEST{
            public class TestClassOne {
                public int DoNothing() { return 1; }
            }
        }"
        $code | Out-File tcone.cs

        Add-Type -OutputAssembly $PWD\TestClassOne.dll -OutputType Library -Path $PWD\tcone.cs
        Add-Type -Path $PWD\TestClassOne.dll

        $a = New-Object TEST.TestClassOne
        "Using TestClassOne"
        $a.DoNothing()

        "Compiling TestClassTwo"
        Add-Type -Language CSharpVersion3 -TypeDefinition "
        namespace TEST{
            public class TestClassTwo {
                public int CallTestClassOne() {
                    var a = new TEST.TestClassOne();
                    return a.DoNothing();
                }
            }
        }" -ReferencedAssemblies $PWD\TestClassOne.dll
        "OK"

        $b = New-Object TEST.TestClassTwo
        "Using TestClassTwo"
        $b.CallTestClassOne()

    Running the above script gives the following error on the last line:

        Exception calling "CallTestClassOne" with "0" argument(s): "Could not load file or assembly 'TestClassOne,...' or one of its dependencies. The system cannot find the file specified."
        At AddTypeTest.ps1:39 char:20
        + $b.CallTestClassOne <<<< ()
            + CategoryInfo          : NotSpecified: (:) [], MethodInvocationException
            + FullyQualifiedErrorId : DotNetMethodException

    What am I doing wrong?
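
    One direction that's often suggested for this kind of failure (a sketch, not a verified fix): the CLR probes for TestClassOne.dll in the host process's base directory, not in the script's working directory, so hooking AppDomain.AssemblyResolve and loading the DLL by absolute path lets the JIT find it when CallTestClassOne first runs:

        # Sketch: resolve TestClassOne by absolute path when the CLR asks for it.
        $dll = (Resolve-Path ".\TestClassOne.dll").Path
        $handler = [System.ResolveEventHandler]{
            param($sender, $e)
            if ($e.Name.StartsWith("TestClassOne")) {
                return [System.Reflection.Assembly]::LoadFrom($dll)
            }
            return $null
        }
        [System.AppDomain]::CurrentDomain.add_AssemblyResolve($handler)

    Registering this before calling $b.CallTestClassOne() should satisfy the load; alternatively, copying the DLL next to powershell.exe (crude, but a useful diagnostic) tests the same theory.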

  • wsdl return an array of complex types

    - by Anand
    Hi, I have defined a web service that will return data from my MySQL database. I have written the web service in PHP. Now I have defined a complex type as follows:

        $server->wsdl->addComplexType(
            'Category',
            'complexType',
            'struct',
            'all',
            '',
            array(
                'category_parent_id' => array('name' => 'category_parent_id', 'type' => 'xsd:int'),
                'category_child_id'  => array('name' => 'category_child_id',  'type' => 'xsd:int'),
                'category_list'      => array('name' => 'category_list',      'type' => 'xsd:int')
            )
        );

    The above complex type is a row in a table in my database. Now my function must send an array of these rows, so how do I achieve that? My code is as follows:

        require_once('./nusoap/nusoap.php');

        $server = new soap_server;
        $server->configureWSDL('productwsdl', 'urn:productwsdl');

        // Register the data structures used by the service
        $server->wsdl->addComplexType(
            'Category',
            'complexType',
            'struct',
            'all',
            '',
            array(
                'category_parent_id' => array('name' => 'category_parent_id', 'type' => 'xsd:int'),
                'category_child_id'  => array('name' => 'category_child_id',  'type' => 'xsd:int'),
                'category_list'      => array('name' => 'category_list',      'type' => 'xsd:int')
            )
        );

        $server->register('getaproduct',                         // method name
            array(),                                             // input parameters
            //array('return' => array('result' => 'tns:Category')), // output parameters
            array('return' => 'tns:Category'),                   // output parameters
            'urn:productwsdl',                                   // namespace
            'urn:productwsdl#getaproduct',                       // soapaction
            'rpc',                                               // style
            'encoded',                                           // use
            'Get the product categories'                         // documentation
        );

        function getaproduct()
        {
            $conn = mysql_connect('localhost', 'root', '');
            mysql_select_db('sssl', $conn);

            $sql = "SELECT * FROM jos_vm_category_xref";
            $q = mysql_query($sql);
            while ($r = mysql_fetch_array($q)) {
                $items[] = array(
                    'category_parent_id' => $r['category_parent_id'],
                    'category_child_id'  => $r['category_child_id'],
                    'category_list'      => $r['category_list']
                );
            }
            return $items;
        }

        // Use the request to (try to) invoke the service
        $HTTP_RAW_POST_DATA = isset($HTTP_RAW_POST_DATA) ? $HTTP_RAW_POST_DATA : '';
        $server->service($HTTP_RAW_POST_DATA);
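
    The usual NuSOAP idiom (a hedged sketch following NuSOAP's documented addComplexType parameters, untested against this service) is to register a second complex type that is a SOAP array of tns:Category, and advertise that as the return type:

        // Declare an array-of-Category type...
        $server->wsdl->addComplexType(
            'CategoryArray',
            'complexType',
            'array',
            '',
            'SOAP-ENC:Array',
            array(),
            array(array('ref' => 'SOAP-ENC:arrayType', 'wsdl:arrayType' => 'tns:Category[]')),
            'tns:Category'
        );

        // ...and return it from the registered method:
        $server->register('getaproduct',
            array(),                                   // input parameters
            array('return' => 'tns:CategoryArray'),    // output: array of Category
            'urn:productwsdl',
            'urn:productwsdl#getaproduct',
            'rpc', 'encoded',
            'Get the product categories'
        );

    The getaproduct() function itself can stay as it is, since it already returns a PHP array of associative arrays matching the Category struct.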

  • Solving problems involving more complex data structures with CUDA

    - by Nils
    So I read a bit about CUDA and GPU programming. I noticed a few things, such as that access to global memory is slow (therefore shared memory should be used) and that the execution paths of threads in a warp should not diverge. I also looked at the (dense) matrix multiplication example described in the programmer's manual, and at the n-body problem. And the trick with the implementation seems to be the same: arrange the calculation in a grid (which it already is in the case of the matrix multiplication); then subdivide the grid into smaller tiles; fetch the tiles into shared memory and let the threads calculate as long as possible, until they need to reload data from global memory into shared memory. In the case of the n-body problem, the calculation for each body-body interaction is exactly the same (page 682):

        bodyBodyInteraction(float4 bi, float4 bj, float3 ai)

    It takes two bodies and an acceleration vector. The body vector has four components: its position and its weight. When reading the paper, the calculation is understood easily. But what if we have a more complex object, with a dynamic data structure? For now, just assume that we have an object (similar to the body object presented in the paper) which has a list of other objects attached, and the number of objects attached is different in each thread. How could I implement that without having the execution paths of the threads diverge? I'm also looking for literature which explains how different algorithms involving more complex data structures can be effectively implemented in CUDA.
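
    One standard pattern for this (a hedged sketch, not from the post): flatten the per-object lists into one contiguous array plus an offsets array (a CSR-style layout), so each thread loops over an index range instead of chasing pointers; warps then diverge only in loop trip count, which is far cheaper than per-element branching. The interaction function here is a stand-in for the paper's:

        // CSR layout: the list attached to object i lives at items[offsets[i] .. offsets[i+1]).
        __device__ float3 bodyBodyInteraction(float4 bi, float4 bj, float3 ai)
        {
            // Stand-in for the paper's kernel body.
            ai.x += bj.x - bi.x;
            ai.y += bj.y - bi.y;
            ai.z += bj.z - bi.z;
            return ai;
        }

        __global__ void interactWithAttached(const float4 *objects,
                                             const int    *offsets,   // numObjects + 1 entries
                                             const float4 *items,     // all lists, concatenated
                                             float3       *accel,
                                             int           numObjects)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i >= numObjects) return;

            float3 ai = make_float3(0.0f, 0.0f, 0.0f);
            // Lanes differ only in how many iterations they run.
            for (int k = offsets[i]; k < offsets[i + 1]; ++k)
                ai = bodyBodyInteraction(objects[i], items[k], ai);
            accel[i] = ai;
        }

    Sorting objects by list length before launch keeps the threads of a warp at similar trip counts, which limits the residual divergence further.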

  • Saving complex aggregates using Repository Pattern

    - by Kevin Lawrence
    We have a complex aggregate (sensitive names obfuscated for confidentiality reasons). The root, R, is composed of collections of Ms, As, Cs, Ss. Ms have collections of other low-level details, etc. R really is an aggregate (no fair suggesting we split it!). We use lazy loading to retrieve the details. No problem there. But we are struggling a little with how to save such a complex aggregate. From the caller's point of view:

        r = repository.find(id);
        r.Ps.add(factory.createP());
        r.Cs[5].updateX(123);
        r.Ms.removeAt(5);
        repository.save(r);

    Our competing solutions are:

    1. Dirty flags. Each entity in the aggregate has a dirty flag. The save() method in the repository walks the tree looking for dirty objects and saves them. Deletes and adds are a little trickier - especially with lazy loading - but doable.

    2. Event listener accumulates changes. The repository subscribes a listener to changes and accumulates events. When save is called, the repository grabs all the change events and writes them to the DB.

    3. Give up on the repository pattern. Implement overloaded save methods to save the parts of the aggregate separately. The original example would become:

        r = repository.find(id);
        r.Ps.add(factory.createP());
        r.Cs[5].updateX(123);
        r.Ms.removeAt(5);
        repository.save(r.Ps);
        repository.save(r.Cs);
        repository.save(r.Ms);

    (or worse.) Advice please! What should we do?
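
    For the second option, a minimal sketch (all names hypothetical, only to make the shape concrete): the aggregate root raises change events, the repository records them while the caller works, and save() replays them against the database:

        using System;
        using System.Collections.Generic;

        interface IChange { void WriteTo(IDb db); }
        interface IDb { void Execute(string sql, params object[] args); }

        class AggregateRoot
        {
            public event Action<IChange> Changed = delegate { };
            public void UpdateX(int id, int x) => Changed(new XUpdated { Id = id, X = x });
        }

        class XUpdated : IChange
        {
            public int Id, X;
            public void WriteTo(IDb db) => db.Execute("UPDATE C SET X=@0 WHERE Id=@1", X, Id);
        }

        class Repository
        {
            private readonly IDb _db;
            private readonly List<IChange> _pending = new List<IChange>();
            public Repository(IDb db) { _db = db; }

            public AggregateRoot Find(int id)
            {
                var root = new AggregateRoot();   // loading elided in this sketch
                root.Changed += _pending.Add;     // start accumulating
                return root;
            }

            public void Save(AggregateRoot root)
            {
                foreach (var change in _pending) change.WriteTo(_db);
                _pending.Clear();
            }
        }

    The appeal of this shape is that lazy-loaded children only need to raise events, not carry dirty flags, and adds/deletes become just more IChange implementations.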

  • Deserialize complex JSON (VB.NET)

    - by Ssstefan
    I'm trying to deserialize JSON returned by some directions API similar to the Google Maps API. My JSON is as follows (I'm using VB.NET 2008):

        jsontext = {
            "version": 0.3,
            "status": 0,
            "route_summary": {
                "total_distance": 300,
                "total_time": 14,
                "start_point": "43",
                "end_point": "42"
            },
            "route_geometry": [[51.025421,18.647631],[51.026131,18.6471],[51.027802,18.645639]],
            "route_instructions": [["Head northwest on 43",88,0,4,"88 m","NW",334.8],["Continue on 42",212,1,10,"0.2 km","NW",331.1,"C",356.3]]
        }

    So far I came up with the following code:

        Dim js As New System.Web.Script.Serialization.JavaScriptSerializer
        Dim lstTextAreas As Output_CloudMade() = js.Deserialize(Of Output_CloudMade())(jsontext)

    I'm not sure how to define the complex class, i.e. Output_CloudMade. I'm trying something like:

        Public Class RouteSummary
            Private mTotalDist As Long
            Private mTotalTime As Long
            Private mStartPoint As String
            Private mEndPoint As String

            Public Property TotalDist() As Long
                Get
                    Return mTotalDist
                End Get
                Set(ByVal value As Long)
                    mTotalDist = value
                End Set
            End Property

            Public Property TotalTime() As Long
                Get
                    Return mTotalTime
                End Get
                Set(ByVal value As Long)
                    mTotalTime = value
                End Set
            End Property

            Public Property StartPoint() As String
                Get
                    Return mStartPoint
                End Get
                Set(ByVal value As String)
                    mStartPoint = value
                End Set
            End Property

            Public Property EndPoint() As String
                Get
                    Return mEndPoint
                End Get
                Set(ByVal value As String)
                    mEndPoint = value
                End Set
            End Property
        End Class

        Public Class Output_CloudMade
            Private mVersion As Double
            Private mStatus As Long
            Private mRSummary As RouteSummary
            'Private mRGeometry As RouteGeometry
            'Private mRInstructions As RouteInstructions

            Public Property Version() As Double
                Get
                    Return mVersion
                End Get
                Set(ByVal value As Double)
                    mVersion = value
                End Set
            End Property

            Public Property Status() As Long
                Get
                    Return mStatus
                End Get
                Set(ByVal value As Long)
                    mStatus = value
                End Set
            End Property

            Public Property Summary() As RouteSummary
                Get
                    Return mRSummary
                End Get
                Set(ByVal value As RouteSummary)
                    mRSummary = value
                End Set
            End Property

            'Public Property Geometry() As String ...
            'Public Property Instructions() As String ...
        End Class

    but it does not work. The problem is with complex properties, like route_summary: it is filled with Nothing. Other properties, like "status" or "version", are filled properly. Any ideas how to define a class for the above JSON? Can you share some working code for deserializing JSON in VB.NET? Thanks.
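
    For what it's worth, a hedged sketch of the usual diagnosis (untested): JavaScriptSerializer binds strictly by member name, and the JSON keys (route_summary, total_distance, ...) match neither Summary nor TotalDist, so those members stay Nothing. Also note the JSON root is a single object, so deserialize to one Output_CloudMade rather than an array. Classes whose public fields mirror the JSON keys exactly (VB 2008 has no auto-implemented properties, hence plain fields) should fill in:

        Public Class RouteSummary
            Public total_distance As Long
            Public total_time As Long
            Public start_point As String
            Public end_point As String
        End Class

        Public Class Output_CloudMade
            Public version As Double
            Public status As Long
            Public route_summary As RouteSummary
            Public route_geometry As Double()()   ' jagged array of [lat, lon] pairs
        End Class

        ' Usage sketch:
        Dim js As New System.Web.Script.Serialization.JavaScriptSerializer
        Dim result As Output_CloudMade = js.Deserialize(Of Output_CloudMade)(jsontext)

    route_instructions mixes strings and numbers in each inner array, so if needed it would have to be declared as Object()() rather than a typed array.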

  • Componentizing complex functionality in an MVC web app

    - by NXT
    Hi everyone, this is a question about MVC web-app architecture, and how it can be extended to handle componentizing moderately complex units of functionality.

    I have an MVC-style web app with a customer-facing credit card charge page. I've been asked to allow the admins to enter credit card payments as well, for times when credit cards are taken over the phone. The customer-facing credit card charge section of the website is currently its own controller, with approximately 3 pages and a login. That controller is responsible for:

    - Customer login credential authentication
    - Credit card data collection
    - Calling a library to do the actual charge
    - Reporting the results to the user

    I would like to extract the card data collection pages into a component of some kind so that I can easily reuse the code on the admin side of the app. Right now my components are limited to single "view" pages with PHP-style embedded Perl code. This is a simple, custom MVC framework written in Perl. Right now, controllers are called directly from the framework to service web requests.

    My idea is to allow controllers to be called from other controllers, so that I can componentize more complex functionality, as sketched below. For simplicity I think I prefer composition over inheritance, even though it will require writing a bunch of pass-through methods (actions). Being Perl, I could in theory do multiple inheritance. I'm wondering if anyone with experience in other MVC web frameworks can comment on how this sort of thing is usually done. Thank you.
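
    Purely as a sketch of the composition idea (the framework is custom, so every name below is hypothetical): the card-collection controller becomes an ordinary object that other controllers instantiate and delegate to through pass-through actions:

        package Controller::Component::CardCapture;
        use strict;
        use warnings;

        sub new {
            my ($class, %args) = @_;
            return bless { view => $args{view} }, $class;
        }

        # The reusable page: collect card details.
        sub show_form {
            my ($self, $ctx) = @_;
            return $self->{view}->render('card_form.tmpl', $ctx);
        }

        package Controller::AdminPayments;
        use strict;
        use warnings;

        sub new {
            my ($class, %args) = @_;
            my $self = bless {%args}, $class;
            $self->{cards} = Controller::Component::CardCapture->new(view => $self->{view});
            return $self;
        }

        # Pass-through action: this controller owns admin auth,
        # the component owns the card form.
        sub enter_payment {
            my ($self, $ctx) = @_;
            die "admin login required\n" unless $ctx->{admin};
            return $self->{cards}->show_form($ctx);
        }

        1;

    The customer-facing controller would hold the same component and wrap it with customer authentication instead, which is the payoff of composition here: the two hosts differ only in their pass-through logic.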

  • How do you unit-test a method with complex input-output

    - by Dan
    When you have a simple method, like for example sum(int x, int y), it is easy to write unit tests. You can check that the method will sum two sample integers correctly, for example 2 + 3 should return 5, then you will check the same for some "extraordinary" numbers, for example negative values and zero. Each of these should be a separate unit test, as a single unit test should contain a single assert.

    What do you do when you have complex input and output? Take an XML parser, for example. You can have a single method parse(String xml) that receives a String and returns a Dom object. You can write separate tests that will check that a certain text node is parsed correctly, that attributes are parsed OK, that a child node belongs to its parent, etc. For all these I can write a simple input, for example:

        <root><child/></root>

    that will be used to check parent-child relationships between nodes, and so on for the rest of the expectations. Now, take a look at the following XML:

        <root>
          <child1 attribute11="attribute 11 value" attribute12="attribute 12 value">Text 1</child1>
          <child2 attribute21="attribute 21 value" attribute22="attribute 22 value">Text 2</child2>
        </root>

    In order to check that the method worked correctly, I need to check many complex conditions, like that attribute11 and attribute12 belong to child1, that Text 1 belongs to child1, etc. I do not want to put more than one assert in my unit test. How can I accomplish that?
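
    One hedged way to keep a single assert per test (a sketch using JUnit 4 and the standard W3C DOM; your own parse(String) wrapper would replace the DocumentBuilder call): parse the fixture once in a class-level setup, then give every expectation its own narrowly named test:

        import static org.junit.Assert.assertEquals;

        import java.io.ByteArrayInputStream;
        import javax.xml.parsers.DocumentBuilderFactory;
        import org.junit.BeforeClass;
        import org.junit.Test;
        import org.w3c.dom.Document;
        import org.w3c.dom.Element;

        public class ParserTest {
            private static Element child1;

            @BeforeClass
            public static void parseFixtureOnce() throws Exception {
                String xml = "<root>"
                    + "<child1 attribute11=\"attribute 11 value\" attribute12=\"attribute 12 value\">Text 1</child1>"
                    + "<child2 attribute21=\"attribute 21 value\" attribute22=\"attribute 22 value\">Text 2</child2>"
                    + "</root>";
                Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                    .parse(new ByteArrayInputStream(xml.getBytes("UTF-8")));
                child1 = (Element) doc.getDocumentElement()
                    .getElementsByTagName("child1").item(0);
            }

            @Test public void child1HasAttribute11() {
                assertEquals("attribute 11 value", child1.getAttribute("attribute11"));
            }

            @Test public void child1HasAttribute12() {
                assertEquals("attribute 12 value", child1.getAttribute("attribute12"));
            }

            @Test public void child1HoldsText1() {
                assertEquals("Text 1", child1.getTextContent());
            }
        }

    The fixture is parsed once, so dozens of one-assert tests stay cheap, and a failure message names exactly which facet of the parse broke.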

  • XmlSerializer construction with same named extra types

    - by NoizWaves
    Hey, I am hitting trouble constructing an XmlSerializer where the extra types contain types with the same Name (but unique FullName). Below is an example that illustrates my scenario.

    Type definitions in an external assembly I cannot manipulate:

        public static class Wheel
        {
            public enum Status { Stopped, Spinning }
        }

        public static class Engine
        {
            public enum Status { Idle, Full }
        }

    Class I have written and have control over:

        public class Car
        {
            public Wheel.Status WheelStatus;
            public Engine.Status EngineStatus;

            public static string Serialize(Car car)
            {
                var xs = new XmlSerializer(typeof(Car),
                    new[] { typeof(Wheel.Status), typeof(Engine.Status) });
                var output = new StringBuilder();
                using (var sw = new StringWriter(output))
                    xs.Serialize(sw, car);
                return output.ToString();
            }
        }

    The XmlSerializer constructor throws a System.InvalidOperationException with the message "There was an error reflecting type 'Engine.Status'". This exception has an InnerException of type System.InvalidOperationException with the message "Types 'Wheel.Status' and 'Engine.Status' both use the XML type name, 'Status', from namespace ''. Use XML attributes to specify a unique XML name and/or namespace for the type."

    Given that I am unable to alter the enum types, how can I construct an XmlSerializer that will serialize Car successfully?
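
    A hedged sketch of one commonly suggested workaround (untested): since the enums cannot be edited, attach the XML attributes at construction time through XmlAttributeOverrides, giving each Status a distinct XML type name (using System.Xml.Serialization):

        // Give each externally defined enum its own XML type name.
        var overrides = new XmlAttributeOverrides();
        overrides.Add(typeof(Wheel.Status),
            new XmlAttributes { XmlType = new XmlTypeAttribute("WheelStatus") });
        overrides.Add(typeof(Engine.Status),
            new XmlAttributes { XmlType = new XmlTypeAttribute("EngineStatus") });

        var xs = new XmlSerializer(
            typeof(Car),
            overrides,
            new[] { typeof(Wheel.Status), typeof(Engine.Status) },
            null,    // no XmlRootAttribute override
            null);   // default namespace

    An XmlTypeAttribute with a distinct Namespace instead of a distinct name should work the same way, per the wording of the inner exception.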

  • Boost::Serialization Mpi Sending array of user defined types

    - by Noman Javed
    I want to send my Array class using Boost.MPI:

        template <class T>
        class Array {
        private:
            int size;
            T* data;
        public:
            // constructors + other stuff
        };

    Here T can be any built-in type or user-defined type. Suppose I have a complex class:

        struct complex {
            std::vector<double> real_imag; // contains two elements
        };

    So the question is: how can I send Array using Boost::Mpi + serialization? Thanks in anticipation. Regards, Noman
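
    A hedged sketch of the usual Boost.Serialization route (untested): give Array an intrusive save/load pair that rebuilds the buffer on the receiving side, give complex a serialize member, and boost::mpi can then send either directly:

        #include <vector>
        #include <boost/mpi.hpp>
        #include <boost/serialization/access.hpp>
        #include <boost/serialization/split_member.hpp>
        #include <boost/serialization/vector.hpp>

        template <class T>
        class Array {
            friend class boost::serialization::access;
            int size;
            T*  data;

            template <class Archive>
            void save(Archive& ar, const unsigned int /*version*/) const {
                ar & size;
                for (int i = 0; i < size; ++i) ar & data[i];
            }
            template <class Archive>
            void load(Archive& ar, const unsigned int /*version*/) {
                ar & size;
                delete[] data;              // rebuild the buffer on receive
                data = new T[size];
                for (int i = 0; i < size; ++i) ar & data[i];
            }
            BOOST_SERIALIZATION_SPLIT_MEMBER()
        public:
            Array() : size(0), data(0) {}
            // constructors + other stuff
        };

        struct complex {
            std::vector<double> real_imag;  // two elements: real and imaginary

            template <class Archive>
            void serialize(Archive& ar, const unsigned int /*version*/) {
                ar & real_imag;             // vector support from the include above
            }
        };

        // Usage sketch: world.send(1, 0, myArray);  world.recv(0, 0, myArray);

    Because T is serialized element by element, Array<complex> works as long as T itself is serializable, which the serialize member on complex provides.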

  • Filtering data in LINQ with the help of where clause

    - by vik20000in
    LINQ has brought with it a superpower of querying Objects, Databases, XML, SharePoint and nearly any other data structure. The power of LINQ lies in the fact that it is managed code that lets you write SQL-style code to fetch data.

    Whenever working with data we always need a way to filter out the data based on different conditions. In this post we will look at some of the different ways in which we can filter data in LINQ with the help of the where clause.

    Simple filter for an array. Let's say we have an array of numbers and we want to filter out data based on some condition. Below is an example:

        int[] numbers = { 5, 4, 1, 3, 9, 8, 6, 7, 2, 0 };
        var lowNums =
            from num in numbers
            where num < 5
            select num;

    Filter based on one of the properties in the class. With the help of LINQ we can also filter out data from a list based on the value of some property:

        var soldOutProducts =
            from prod in products
            where prod.UnitsInStock == 0
            select prod;

    Filter based on multiple properties in the class:

        var expensiveInStockProducts =
            from prod in products
            where prod.UnitsInStock > 0 && prod.UnitPrice > 3.00M
            select prod;

    Filter based on the index of the item in the list. In the example below we can see that we are able to filter data based on the index of the item in the list:

        string[] digits = { "zero", "one", "two", "three", "four", "five", "six" };
        var shortDigits = digits.Where((digit, index) => digit.Length < index);

    There are many other ways in which we can filter out data in LINQ. In the above post I have tried to show a few of them. Vikram

  • RPi and Java Embedded GPIO: Big Data and Java Technology

    - by hinkmond
    Java Embedded and Big Data go hand-in-hand, especially as demonstrated by prototyping on a Raspberry Pi to show how well the Java Embedded platform can perform on a small embedded device, which then becomes the proof-of-concept for industrial controllers, medical equipment, networking gear or any type of sensor-connected device generating large amounts of data. The key is a fast and reliable way to access that data using Java technology.

    In the previous blog posts you've seen the integration of a static electricity sensor and the Raspberry Pi through the GPIO port, then accessing that data through Java Embedded code. It's important to point out how this works and why it works well with Java code.

    First, the version of Linux (Debian Wheezy/Raspbian) that is found on the RPi has a very convenient way to access the GPIO ports: through the use of Linux OS managed file handles. This is key in avoiding terrible and complex coding using register manipulation in C code, or having to program in a less elegant and clumsy procedural scripting language such as python. Instead, using Java Embedded allows a fast way to access those GPIO ports through those same Linux file handles.

    Java already has a very easy-to-program way to access file handles, with a high degree of performance that matches direct access of those file handles with the Linux OS. Using the Java API java.io.FileWriter lets us open the same file handles that the Linux OS has for accessing the GPIO ports. Then, by first resetting the ports using the unexport and export file handles, we can initialize them for easy use in a Java app.

        // Open file handles to GPIO port unexport and export controls
        FileWriter unexportFile = new FileWriter("/sys/class/gpio/unexport");
        FileWriter exportFile = new FileWriter("/sys/class/gpio/export");
        ...
        // Reset the port
        unexportFile.write(gpioChannel);
        unexportFile.flush();

        // Set the port for use
        exportFile.write(gpioChannel);
        exportFile.flush();

    Then, another set of file handles can be used by the Java app to control the direction of the GPIO port by writing either "in" or "out" to the direction file handle.

        // Open file handle to input/output direction control of port
        FileWriter directionFile =
            new FileWriter("/sys/class/gpio/gpio" + gpioChannel + "/direction");

        // Set port for input
        directionFile.write("in"); // Or, use "out" for output
        directionFile.flush();

    And, finally, a RandomAccessFile handle can be used with a high degree of performance on par with native C code (only milliseconds to read in data and write out data) with low overhead (unlike python) to manipulate the data going in and out on the GPIO port, while the object-oriented nature of Java programming allows for an easy way to construct complex analytic software around that data access functionality to the external world.

        RandomAccessFile[] raf = new RandomAccessFile[GpioChannels.length];
        ...
        // Reset file seek pointer to read latest value of GPIO port
        raf[channum].seek(0);
        raf[channum].read(inBytes);
        inLine = new String(inBytes);

    It's Big Data from sensors and industrial/medical/networking equipment meeting complex analytical software on a small constrained device (like a Linux/ARM RPi), where Java Embedded allows you to shine as an Embedded Device Software Designer. Hinkmond

  • SSIS Debugging Tip: Using Data Viewers

    - by Jim Giercyk
    When you have an SSIS package error, it is often very helpful to see the data records that are causing the problem. After all, if your input has 50,000 records and 1 of them has corrupt data, it can be a chore. Your execution results will tell you which column contains the bad data, but not which record... enter the Data Viewer.

    In this scenario I have created a truncation error. The input length of [lastname] is 50, but the output table has a length of 15. When it runs, at least one of the records causes the package to fail. Now what? We can tell from our execution results that there is a problem with [lastname], but we have no idea WHICH record.

    Let's identify the row that is actually causing the problem. First, we grab the oft-forgotten Row Count shape from our toolbar and connect it to the error output from our input query. Remember that in order to intercept errors with the error output, you must redirect them.

    The Row Count shape requires 1 integer variable. For our purposes, we will not reference the variable, but it is still required in order for the package to run. Typically we would use the variable to hold the number of rows in the table and refer back to it later in our process. We are simply using the Row Count as a "dead end" for errors. I called my variable RowCounter. To create a variable, with no shapes selected, right-click on the background and choose Variable.

    Once we have set up the Row Count shape, we can right-click on the red line (error output) from the query, and select Data Viewers. In the popup, we click the Add button. There are other fancier options we can play with, but for now we just want to view the output in a grid. We select Grid, then click OK on all of the popup windows to shut them down. We should now see a grid icon with a pair of glasses on the error output line.

    So, we are ready to catch the error output in a grid and see what is causing the problem! This time when we run the package, it does not fail, because we directed the error to the Row Count. We also get a popup window showing the error record in a grid. If there were multiple errors we would see them all.

    Indeed, the [lastname] column is longer than 15 characters. Notice the last column in the grid, [Error Code - Description]. We knew this was a truncation error before we added the grid, but if you have worked with SSIS for any length of time, you know that some errors are much more obscure. The description column can be very useful under those circumstances!

    Data viewers can be used any time we want to see the data that is actually in the pipeline; they stop the package temporarily until we shut them. Also remember that the Row Count shape can be used as a "dead end". It is useful during development when we want to see the output from a dataflow, but don't want to update a table or file with the data. Data viewers are an invaluable tool for both development and debugging. Just remember to REMOVE THEM before putting your package into production!

  • Error in data view when connecting to an Oracle DB

    - by Mike Polen
    When using SharePoint Designer I found this link that stepped me through how to get it working: http://spsolution.blogspot.com/2008/12/how-to-insert-data-source-in-sharepoint.html That allowed SharePoint Designer to talk to Oracle, but when I placed a data view on a page it gave me the following error:

        Error while executing web part: System.Data.OracleClient.OracleException: ORA-00923: FROM keyword not found where expected
           at System.Data.OracleClient.OracleConnection.CheckError(OciErrorHandle errorHandle, Int32 rc)
           at System.Data.OracleClient.OracleCommand.Execute(OciStatementHandle statementHandle, CommandBehavior behavior, Boolean needRowid, OciRowidDescriptor& rowidDescriptor, ArrayList& resultParameterOrdinals)
           at System.Data.OracleClient.OracleCommand.Execute(OciStatementHandle statementHandle, CommandBehavior behavior, ArrayList& resultParameterOrdinals)
           at System.Data.OracleClient.OracleCommand.ExecuteReader(CommandBehavior behavior)
           at System.Data.OracleClient.OracleCommand.ExecuteDbDataReader(CommandBehavior behavior)
           at System.Data.Common.DbCommand.System.Data.IDbCommand.ExecuteReader(CommandBehavior behavior)
           at System.Data.Common.DbDataAdapter.FillInternal(DataSet dataset, DataTable[] datatables, Int32 startRecord, Int32 maxRecords, String srcTable, IDbCommand command, CommandBehavior behavior)
           at System.Data.Common.DbDataAdapter.Fill(DataSet dataSet, Int32 startRecord, Int32 maxRecords, String srcTable, IDbCommand command, CommandBehavior behavior)
           at System.Data.Common.DbDataAdapter.Fill(DataSet dataSet, String srcTable)
           at System.Web.UI.WebControls.SqlDataSourceView.ExecuteSelect(DataSourceSelectArguments arguments)
           at System.Web.UI.DataSourceView.Select(DataSourceSelectArguments arguments, DataSourceViewSelectCallback callback)
           at Microsoft.SharePoint.WebControls.SingleDataSource.GetXPathNavigatorInternal()
           at Microsoft.SharePoint.WebControls.SingleDataSource.GetXPathNavigator()
           at Microsoft.SharePoint.WebControls.SingleDataSource.GetXPathNavigator(IDataSource datasource, Boolean originalData)
           at Microsoft.SharePoint.WebPartPages.DataFormWebPart.GetXPathNavigator(String viewPath)
           at Microsoft.SharePoint.WebPartPages.DataFormWebPart.PrepareAndPerformTransform()

    I am mystified.

  • Is it possible to have a wireless in-house NAS with wireless data transfer rates of equivalent to SATA speeds?

    - by techaddict
    Basically I would like to know if it is possible to set up a NAS in my house, accessed wirelessly, that can reach real-life data transfer speeds equivalent to USB 3.0 or an internal SATA hard drive. I have been wanting to do this for some time (a couple of years now). Basically, this is what I want to do:

    - Plug in a number of hard drives in an array, somewhere in my house, to be left plugged in and never have to be monitored. Ideally several terabytes.
    - Whenever I am home, have my computer and laptop configured to automatically find the NAS, as easily as plugging in an external hard drive - except completely wirelessly.
    - Data transfer needs to be as seamless and quick as having added another internal hard drive in my laptop.
    - Moreover, data should be accessible without having to copy it over - I should be able to wirelessly access the NAS, browse files, and open files directly from the NAS. For example, say I wanted to open a video: I should be able to play a video that is located on the NAS, directly from the NAS, completely wirelessly. If I wanted to open a .pdf file, I should be able to open it and read it directly from the NAS, as if it were located on my physical internal hard drive.
    - Cost is important as well.

    Please tell me what equipment I need for this to be possible. I know you geniuses out there can tell me if this is possible.

  • Call Webservice without adding a WebReference - with Complex Types

    - by ck
    I'm using the code at This Site to call a webservice dynamically.

        [SecurityPermissionAttribute(SecurityAction.Demand, Unrestricted = true)]
        public static object CallWebService(string webServiceAsmxUrl,
            string serviceName, string methodName, object[] args)
        {
            System.Net.WebClient client = new System.Net.WebClient();
            //-Connect to the web service
            using (System.IO.Stream stream = client.OpenRead(webServiceAsmxUrl + "?wsdl"))
            {
                //--Now read the WSDL file describing a service.
                ServiceDescription description = ServiceDescription.Read(stream);

                ///// LOAD THE DOM /////////
                //--Initialize a service description importer.
                ServiceDescriptionImporter importer = new ServiceDescriptionImporter();
                importer.ProtocolName = "Soap12"; // Use SOAP 1.2.
                importer.AddServiceDescription(description, null, null);

                //--Generate a proxy client.
                importer.Style = ServiceDescriptionImportStyle.Client;
                //--Generate properties to represent primitive values.
                importer.CodeGenerationOptions =
                    System.Xml.Serialization.CodeGenerationOptions.GenerateProperties;

                //--Initialize a Code-DOM tree into which we will import the service.
                CodeNamespace nmspace = new CodeNamespace();
                CodeCompileUnit unit1 = new CodeCompileUnit();
                unit1.Namespaces.Add(nmspace);

                //--Import the service into the Code-DOM tree. This creates proxy code
                //--that uses the service.
                ServiceDescriptionImportWarnings warning = importer.Import(nmspace, unit1);
                if (warning == 0) //--If zero then we are good to go
                {
                    //--Generate the proxy code
                    CodeDomProvider provider1 = CodeDomProvider.CreateProvider("CSharp");

                    //--Compile the assembly proxy with the appropriate references
                    string[] assemblyReferences = new string[5] {
                        "System.dll", "System.Web.Services.dll", "System.Web.dll",
                        "System.Xml.dll", "System.Data.dll" };
                    CompilerParameters parms = new CompilerParameters(assemblyReferences);
                    CompilerResults results = provider1.CompileAssemblyFromDom(parms, unit1);

                    //-Check for errors
                    if (results.Errors.Count > 0)
                    {
                        StringBuilder sb = new StringBuilder();
                        foreach (CompilerError oops in results.Errors)
                        {
                            sb.AppendLine("========Compiler error============");
                            sb.AppendLine(oops.ErrorText);
                        }
                        throw new System.ApplicationException(
                            "Compile Error Occured calling webservice. " + sb.ToString());
                    }

                    //--Finally, invoke the web service method
                    Type foundType = null;
                    Type[] types = results.CompiledAssembly.GetTypes();
                    foreach (Type type in types)
                    {
                        if (type.BaseType ==
                            typeof(System.Web.Services.Protocols.SoapHttpClientProtocol))
                        {
                            Console.WriteLine(type.ToString());
                            foundType = type;
                        }
                    }
                    object wsvcClass = results.CompiledAssembly.CreateInstance(foundType.ToString());
                    MethodInfo mi = wsvcClass.GetType().GetMethod(methodName);
                    return mi.Invoke(wsvcClass, args);
                }
                else
                {
                    return null;
                }
            }
        }

    This works fine when I use built-in types, but for my own classes, I get this:

        Event Type:     Error
        Event Source:   TDX Queue Service
        Event Category: None
        Event ID:       0
        Date:           12/04/2010
        Time:           12:12:38
        User:           N/A
        Computer:       TDXRMISDEV01
        Description:
        System.ArgumentException: Object of type 'TDXDataTypes.AgencyOutput' cannot be converted to type 'AgencyOutput'.

        Server stack trace:
           at System.RuntimeType.CheckValue(Object value, Binder binder, CultureInfo culture, BindingFlags invokeAttr)
           at System.Reflection.MethodBase.CheckArguments(Object[] parameters, Binder binder, BindingFlags invokeAttr, CultureInfo culture, Signature sig)
           at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture, Boolean skipVisibilityChecks)
           at System.Reflection.RuntimeMethodInfo.Invoke(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture)
           at System.Reflection.MethodBase.Invoke(Object obj, Object[] parameters)
           at TDXQueueEngine.GenericWebserviceProxy.CallWebService(String webServiceAsmxUrl, String serviceName, String methodName, Object[] args) in C:\CkAdmDev\TDXQueueEngine\TDXQueueEngine\TDXQueueEngine\GenericWebserviceProxy.cs:line 76
           at TDXQueueEngine.TDXQueueWebserviceItem.Run() in C:\CkAdmDev\TDXQueueEngine\TDXQueueEngine\TDXQueueEngine\TDXQueueWebserviceItem.cs:line 99
           at System.Runtime.Remoting.Messaging.StackBuilderSink._PrivateProcessMessage(IntPtr md, Object[] args, Object server, Int32 methodPtr, Boolean fExecuteInContext, Object[]& outArgs)
           at System.Runtime.Remoting.Messaging.StackBuilderSink.PrivateProcessMessage(RuntimeMethodHandle md, Object[] args, Object server, Int32 methodPtr, Boolean fExecuteInContext, Object[]& outArgs)
           at System.Runtime.Remoting.Messaging.StackBuilderSink.AsyncProcessMessage(IMessage msg, IMessageSink replySink)

        Exception rethrown at [0]:
           at System.Runtime.Remoting.Proxies.RealProxy.EndInvokeHelper(Message reqMsg, Boolean bProxyCase)
           at System.Runtime.Remoting.Proxies.RemotingProxy.Invoke(Object NotUsed, MessageData& msgData)
           at TDXQueueEngine.TDXQueue.RunProcess.EndInvoke(IAsyncResult result)
           at TDXQueueEngine.TDXQueue.processComplete(IAsyncResult ar) in C:\CkAdmDev\TDXQueueEngine\TDXQueueEngine\TDXQueueEngine\TDXQueue.cs:line 130

        For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.

    The classes reference the same assembly and the same version. Do I need to include my assembly as a reference when building the temporary assembly? If so, how? Thanks.

  • Calling the same xsl:template for different node names of the same complex type

    - by CraftyFella
    Hi, I'm trying to keep my XSL DRY, and as a result I wanted to call the same template for 2 sections of an XML document which happen to be the same complex type (ContactDetails and AltContactDetails). Given the following XML:

        <?xml version="1.0" encoding="UTF-8"?>
        <RootNode>
          <Name>Bob</Name>
          <ContactDetails>
            <Address>
              <Line1>1 High Street</Line1>
              <Town>TownName</Town>
              <Postcode>AB1 1CD</Postcode>
            </Address>
            <Email>[email protected]</Email>
          </ContactDetails>
          <AltContactDetails>
            <Address>
              <Line1>3 Market Square</Line1>
              <Town>TownName</Town>
              <Postcode>EF2 2GH</Postcode>
            </Address>
            <Email>[email protected]</Email>
          </AltContactDetails>
        </RootNode>

    I wrote an XSL stylesheet as follows:

        <?xml version="1.0" encoding="UTF-8"?>
        <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
          <xsl:template match="/">
            <PersonsName>
              <xsl:value-of select="RootNode/Name"/>
            </PersonsName>
            <xsl:call-template name="ContactDetails">
              <xsl:with-param name="data"><xsl:value-of select="RootNode/ContactDetails"/></xsl:with-param>
              <xsl:with-param name="elementName"><xsl:value-of select="'FirstAddress'"/></xsl:with-param>
            </xsl:call-template>
            <xsl:call-template name="ContactDetails">
              <xsl:with-param name="data"><xsl:value-of select="RootNode/AltContactDetails"/></xsl:with-param>
              <xsl:with-param name="elementName"><xsl:value-of select="'SecondAddress'"/></xsl:with-param>
            </xsl:call-template>
          </xsl:template>

          <xsl:template name="ContactDetails">
            <xsl:param name="data"></xsl:param>
            <xsl:param name="elementName"></xsl:param>
            <xsl:element name="{$elementName}">
              <FirstLine>
                <xsl:value-of select="$data/Address/Line1"/>
              </FirstLine>
              <Town>
                <xsl:value-of select="$data/Address/Town"/>
              </Town>
              <PostalCode>
                <xsl:value-of select="$data/Address/Postcode"/>
              </PostalCode>
            </xsl:element>
          </xsl:template>
        </xsl:stylesheet>

    When I try to run the stylesheet, it complains that I need to:

        To use a result tree fragment in a path expression, either use exsl:node-set() or specify version 1.1

    I don't want to go to version 1.1. So does anyone know how to get exsl:node-set() working for the above example? Or if someone knows of a better way to apply the same template to 2 different sections, that would also really help me out. Thanks, Dave
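
    For what it's worth, a hedged sketch of one likely fix (untested against this exact stylesheet): the error comes from building $data with xsl:value-of inside the with-param body, which produces a result tree fragment rather than a node-set. Passing the nodes directly with the select attribute keeps everything in plain XSLT 1.0, with no exsl:node-set() needed:

        <xsl:call-template name="ContactDetails">
          <xsl:with-param name="data" select="RootNode/ContactDetails"/>
          <xsl:with-param name="elementName" select="'FirstAddress'"/>
        </xsl:call-template>
        <xsl:call-template name="ContactDetails">
          <xsl:with-param name="data" select="RootNode/AltContactDetails"/>
          <xsl:with-param name="elementName" select="'SecondAddress'"/>
        </xsl:call-template>

    With $data bound to a real node-set, the $data/Address/Line1 paths inside the named template evaluate as intended.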

  • SQLAuthority News – List of Master Data Services White Paper

    - by pinaldave
    Since my TechEd India 2010 presentation I am very excited about SQL Server Master Data Services (MDS). I just came across very interesting white papers on the Microsoft site related to this subject. Here is the list, with locations where you can download them. They are all written by top experts at Microsoft.

    - Master Data Management from a Business Perspective - Download a PDF version or an XPS version
    - Master Data Management from a Technical Perspective - Download a PDF version or an XPS version
    - Bringing Master Data Management to the Stakeholders - Download a PDF version or an XPS version
    - Implementing a Phased Approach to Master Data Management - Download a PDF version or an XPS version
    - SharePoint Workflow Integration with Master Data Services - Read it here.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQL White Papers, T SQL

  • Offloading (Some) EBS 12 Reporting to Active Data Guard Instances

    - by Steven Chan
    For most Oracle Database users, Oracle Active Data Guard allows users to:

    - Create a physical standby database for business continuity and disaster recovery
    - Offload reporting from the production database to the read-only physical standby database

    E-Business Suite customers have been able to use Active Data Guard to create physical standby databases for their EBS environments since the feature was introduced with the 11g Database. EBS sysadmins can use the generic Active Data Guard documentation to take advantage of the Active Data Guard standby database capabilities. I am pleased to announce that it is now possible to offload a subset of some ReportWriter-based reports -- but not all -- from a production EBS environment to an Active Data Guard physical standby database. But before I go into the details of this newly-certified configuration, it's necessary to understand some details about what happens whenever someone attempts to access the E-Business Suite.

  • Pre-filtering and shaping OData feeds using WCF Data Services and the Entity Framework - Part 2

    - by rajbk
    In the previous post, you saw how to create an OData feed and pre-filter the data. In this post, we will see how to shape the data. A sample project is attached at the bottom of this post. (See: Pre-filtering and shaping OData feeds using WCF Data Services and the Entity Framework - Part 1.)

    Shaping the feed. The Product feed we created earlier returns too much information about our products. Let's change this so that only the following properties are returned: ProductID, ProductName, QuantityPerUnit, UnitPrice, UnitsInStock. We also want to return only Products that are not discontinued.

    Splitting the Entity. To shape our data according to the requirements above, we are going to split our Product Entity into two and expose one through the feed. The exposed entity will contain only the properties listed above. We will use the other Entity in our Query Interceptor to pre-filter the data so that discontinued products are not returned.

    Go to the design surface for the Entity Model and make a copy of the Product entity. A "Product1" Entity gets created. Rename Product1 to ProductDetail. Right click on the Product entity and select "Add Association". Make a one to one association between Product and ProductDetail. Keep only the properties we wish to expose on the Product entity and delete all other properties on it. You delete a property on an Entity by right clicking on the property and selecting "delete". Keep the ProductID on the ProductDetail. Delete any other property on the ProductDetail entity that is already present in the Product entity.

    Mapping Entity to Database Tables. Right click on "ProductDetail" and go to "Table Mapping". Add a mapping to the "Products" table in the Mapping Details.

    Add a referential constraint. Let's add a referential constraint, which is similar to a referential integrity constraint in SQL. Double click on the Association between the Entities and add the constraint with "Principal" set to "Product".

    Let us review what we did so far:

    - We made a copy of the Product entity and called it ProductDetail.
    - We created a one to one association between these entities.
    - Excluding the ProductID, we made sure properties were not duplicated between these entities.
    - We added a ProductDetail entity to Products table mapping (Entity to Database).
    - We added a referential constraint between the entities.

    Let's build our project. We get the following error:

        'NortwindODataFeed.Product' does not contain a definition for 'Discontinued' and no extension method 'Discontinued' accepting a first argument of type 'NortwindODataFeed.Product' could be found ...

    The reason for this error is that our Product Entity no longer has a "Discontinued" property. We "moved" it to the ProductDetail entity, since we want our Product Entity to contain only properties that will be exposed by our feed. Since we have a one to one association between the entities, we can easily rewrite our Query Interceptor like so:

        [QueryInterceptor("Products")]
        public Expression<Func<Product, bool>> OnReadProducts()
        {
            return o => o.ProductDetail.Discontinued == false;
        }

    Similarly, all "hidden" properties of the Product table are available to us internally (through the ProductDetail Entity) for any additional logic we wish to implement. Compile the project and view the feed. We see that the feed returns only the properties that were part of the requirement.

    To see the data in JSON format, you have to create a request with the following request header (easy to do in jQuery):

        Accept: application/json, text/javascript, */*

    The result should look like this:

        { "d" : { "results": [
            {
                "__metadata": { "uri": "http://localhost.:2576/DataService.svc/Products(1)", "type": "NorthwindModel.Product" },
                "ProductID": 1, "ProductName": "Chai", "QuantityPerUnit": "10 boxes x 20 bags",
                "UnitPrice": "18.0000", "UnitsInStock": 39
            },
            {
                "__metadata": { "uri": "http://localhost.:2576/DataService.svc/Products(2)", "type": "NorthwindModel.Product" },
                "ProductID": 2, "ProductName": "Chang", "QuantityPerUnit": "24 - 12 oz bottles",
                "UnitPrice": "19.0000", "UnitsInStock": 17
            },
            ...

    If anyone has the $format operation working, please post a comment. It was not working for me at the time of writing this.

    We have successfully pre-filtered our data to expose only products that have not been discontinued, and shaped our data so that only certain properties of the Entity are exposed. Note that there are several other ways you could implement this, like creating a QueryView, Stored Procedure or DefiningQuery. You have seen how easy it is to create an OData feed, shape the data and pre-filter it by hardly writing any code of your own.

    For more details on OData, Google it with your favorite search engine :-) Also check out one of the most passionate persons I have ever met, Pablo Castro - the architect of Astoria (WCF Data Services). Watch his MIX 2010 presentation titled "OData: There's a Feed for That" here.

    Download Sample Project for VS 2010 RTM: NortwindODataFeed.zip

  • SQL Server DATA Tools CTP4 Released!

    - by hassanfadili
    The SQL Server team has released the new SQL Server Data Tools CTP4. Congratulations and thanks to Gert Drapers and his team on this great milestone. To learn more about this SSDT CTP4 release, check:

    What's new in SQL Server Data Tools CTP4?
    http://blogs.msdn.com/b/ssdt/archive/2011/11/21/what-s-new-in-sql-server-data-tools-ctp4.aspx

    SQL Server Data Tools CTP4 vs. VS2010 Database Projects
    http://blogs.msdn.com/b/ssdt/archive/2011/11/21/sql-server-data-tools-ctp4-vs-vs2010-database-projects.aspx

    Top VSDB->SSDT Project Conversion Issues
    http://blogs.msdn.com/b/ssdt/archive/2011/11/21/top-vsdb-gt-ssdt-project-conversion-issues.aspx

    Uninstalling SQL Server Developer Tools CTP3 (Code-named "Juneau")
    http://blogs.msdn.com/b/ssdt/archive/2011/11/21/uninstalling-ssdt-ctp3-code-named-juneau.aspx
    This last one actually points to a nifty PowerShell script to help you uninstall.

    Have fun.

  • ODI 11g - Oracle Data Integrator 11g – A Hands-On Tutorial

    - by David Allan
    I have been asked by Packt Publishing to review a brand new book on Oracle Data Integrator: Getting Started with Oracle Data Integrator 11g – A Hands-On Tutorial. I'm waiting on this book to arrive to see what goodies are inside; I'll blog a review later. The book can be found at Oracle Data Integrator 11g – A Hands-On Tutorial.

    Looking at the table of contents, it looks like it gives a good broad introduction (including various data formats) to the product:

    Chapter 1: Product Overview
    Chapter 2: Product Installation
    Chapter 3: Using Variables
    Chapter 4: ODI Sources, Targets, and Knowledge Modules
    Chapter 5: Working with Databases
    Chapter 6: Working with MySQL
    Chapter 7: Working with Microsoft SQL Server
    Chapter 8: Integrating File Data
    Chapter 9: Working with XML Files
    Chapter 10: Creating Workflows—Packages and Load Plans
    Chapter 11: Error Management
    Chapter 12: Managing and Monitoring ODI Components
    Chapter 13: Concluding Remarks

    Looking forward to it.

  • SSIS Snack: Data Flow Source Adapters

    - by andyleonard
    Introduction

    Configuring a Source Adapter in a Data Flow Task couples (binds) the Data Flow to an external schema. This has implications for dynamic data loads.

    "Why Can't I...?"

    I'm often asked a question similar to the following: "I have 17 flat files with different schemas that I want to load to the same destination database - how many Data Flow Tasks do I need?" I reply, "17 different schemas? That's easy: you need 17 Data Flow Tasks." In his book Microsoft SQL Server 2005 Integration Services...(read more)

  • Talend Enterprise Data Integration overperforms on Oracle SPARC T4

    - by Amir Javanshir
    The SPARC T microprocessor, released in 2005 by Sun Microsystems, and now continued at Oracle, has a good track record in parallel execution and multi-threaded performance. However, it was less suited for pure single-threaded workloads. The new SPARC T4 processor is now filling that gap by offering a 5x better single-thread performance over previous generations.

    Following our long-term relationship with Talend, a fast-growing ISV positioned by Gartner in the "Visionaries" quadrant of the "Magic Quadrant for Data Integration Tools", we decided to test some of their integration components with the T4 chip, more precisely on a T4-1 system, in order to verify first hand if this new processor stands up to its promises. Several tests were performed, mainly focused on:

    - Single-thread performance of the new SPARC T4 processor compared to an older SPARC T2+ processor
    - Overall throughput of the SPARC T4-1 server using multiple threads

    The tests consisted in reading large amounts of data -- tens of gigabytes -- then processing and writing them back to a file or an Oracle 11gR2 database table. They are CPU, memory and IO bound tests. Given the main focus of this project -- CPU performance -- bottlenecks were removed as much as possible on the memory and IO sub-systems. When possible, the data to process was put into the ZFS filesystem cache, for instance. Also, two external storage devices were directly attached to the servers under test, each one divided in two ZFS pools for read and write operations.

    Multi-thread: testing throughput on the Oracle T4-1

    The tests were performed with different numbers of simultaneous threads (1, 2, 4, 8, 12, 16, 32, 48 and 64) and using different storage devices: Flash, Fibre Channel storage, two striped internal disks and one single internal disk. All storage devices used ZFS for filesystem and volume management. Each thread read a dedicated 1GB-large file containing 12.5M lines with the following structure:

        customerID;FirstName;LastName;StreetAddress;City;State;Zip;Cust_Status;Since_DT;Status_DT
        1;Ronald;Reagan;South Highway;Santa Fe;Montana;98756;A;04-06-2006;09-08-2008
        2;Theodore;Roosevelt;Timberlane Drive;Columbus;Louisiana;75677;A;10-05-2009;27-05-2008
        3;Andrew;Madison;S Rustle St;Santa Fe;Arkansas;75677;A;29-04-2005;09-02-2008
        4;Dwight;Adams;South Roosevelt Drive;Baton Rouge;Vermont;75677;A;15-02-2004;26-01-2007
        [...]

    Unsurprisingly, up to 16 threads all files fit in the ZFS cache (a.k.a. L2ARC): once the cache is hot, there is no performance difference depending on the underlying storage. From 16 threads upwards, however, it is clear that IO becomes a bottleneck; having a good IO subsystem is thus key. Single-disk performance collapses, whereas the Sun F5100 and ST6180 arrays allow the T4-1 to scale quite seamlessly. From 32 to 64 threads, the performance is almost constant, with just a slow decline.

    For the database load tests, only the best IO configuration -- using external storage devices -- was used, hosting the Oracle table spaces and redo log files. Using the Sun Storage F5100 array allows the T4-1 server to scale up to 48 parallel JVM processes before saturating the CPU. The final result is a staggering 646K lines per second insertion into an Oracle table using 48 parallel threads.

    Single-thread: testing the single-thread performance

    Seven different tests were performed on both servers. Given the fact that only one thread, thus one file, was read, no IO bottleneck was involved, all data being served from the ZFS cache.

    1. Read File -> Filter -> Write File: read file, filter data, write the filtered data in a new file. The filter is set on the "Status" column: only lines with status set to "A" are selected. This limits each output file to about 500 MB.
    2. Read File -> Load Database Table: read file, insert into a single Oracle table.
    3. Average: read file, compute the average of a numeric column, write the result in a new file.
    4. Division & Square Root: read file, perform a division and square root on a numeric column, write the result data in a new file.
    5. Oracle DB Dump: dump the content of an Oracle table (12.5M rows) into a CSV file.
    6. Transform: read file, transform, write the result data in a new file. The transformations applied are: set the address column to upper case and add an extra column at the end, which is the concatenation of two columns.
    7. Sort: read file, sort a numeric and alphanumeric column, write the result data in a new file.

    The final results of the tests (throughput unit is thousand lines per second processed; improvement is the % of improvement between the T5140 and T4-1):

        Test                     T4-1 (Time s)  T5140 (Time s)  Improvement  T4-1 (Throughput)  T5140 (Throughput)
        Read/Filter/Write        125            806             645%         100                16
        Read/Load Database       195            1111            570%         64                 11
        Average                  96             557             580%         130                22
        Division & Square Root   161            1054            655%         78                 12
        Oracle DB Dump           164            945             576%         76                 13
        Transform                159            1124            707%         79                 11
        Sort                     251            1336            532%         50                 9

    The improvement in single-thread performance is quite dramatic: depending on the tests, the T4 is between 5.4 and 7 times faster than the T2+. It seems clear that the SPARC T4 processor has gone a long way filling the gap in single-thread performance, without sacrificing the multi-threaded capability, as it still shows very impressive scaling on heavy-duty multi-threaded jobs.

    Finally, as always at Oracle ISV Engineering, we are happy to help our ISV partners test their own applications on our platforms, so don't hesitate to contact us and let's see what the SPARC T4-based systems can do for your application!

    "As described in this benchmark, Talend Enterprise Data Integration has overperformed on T4. I was generally happy to see that the T4 gave scaling opportunities for many scenarios like complex aggregations. Row by row insertion in Oracle DB is faster with more than 650,000 rows per second without using any bulk Oracle capabilities!" Cedric Carbone, Talend CTO.

  • Oracle Database 11g Helps Control Exponential Data Growth

    - by [email protected]
    The 2010 ESG annual customer survey is now available. As part of it, ESG interviewed 300 customers about their IT priorities and, unsurprisingly, "Manage Data Growth" is top of the list. Perhaps less self-evident is the proposed solution to target this prime concern: "Often overlooked because it is a database platform, Oracle Database 11g offers additional capabilities such as automatic storage management (ASM), advanced data compression, and data protection that make managing data growth much easier for organizations of any size." The paper goes on to discuss these capabilities and highlights their potential benefits. Oracle Database 11g Helps Control Exponential Database Growth - a worthwhile read for anyone having to deal with rapidly increasing amounts of data. Download your free copy here.
