Search Results

Search found 4772 results on 191 pages for 'complex'.

Page 162/191

  • Best Practices For Secure APIs?

    - by Ferrett Steinmetz
    Let's say I have a website that has a lot of information on our products. I'd like some of our customers (including us!) to be able to look up our products through various methods, including: 1) Pulling data from AJAX calls that return data in cool, JavaScripty ways 2) Creating iPhone applications that use that data; 3) Having other web applications use that data for their own ends. Normally, I'd just create an API and be done with it. However, this data is in fact mildly confidential - which is to say that we don't want our competitors to be able to look up all our products every morning and then automatically set their prices to undercut us. And we also want to be able to look at who might be abusing the system, so if someone's making ten million complex calls to our API a day and bogging down our server, we can cut them off. My next logical step would be to create a developers' key to restrict access - which would work fine for web apps, but not so much for any AJAX calls. (As I see it, they'd need to provide the key in the JavaScript, which is in plaintext and easily seen, and hence there's actually no security at all. Particularly if we'd be using our own developers' keys on our site to make these AJAX calls.) So my question: after looking around at OAuth and OpenID for some time, I'm not sure there is a solution that would handle all three of the above. Is there some sort of canonical "best practices" for developers' keys, or can OAuth and OpenID handle AJAX calls easily in some fashion I have yet to grok, or am I missing something entirely?
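
    As a purely illustrative aside (not part of the question): the "cut them off" half of a developers'-key scheme usually comes down to metering calls per key on the server. A minimal in-memory sketch in C#, with every name invented for the example, might look like this:

        using System;
        using System.Collections.Concurrent;

        // Hypothetical per-key rate limiter: allows at most maxCallsPerDay calls per API key.
        public class ApiRateLimiter
        {
            private readonly int maxCallsPerDay;
            private readonly ConcurrentDictionary<string, int> callCounts = new ConcurrentDictionary<string, int>();
            private DateTime windowStart = DateTime.UtcNow.Date;

            public ApiRateLimiter(int maxCallsPerDay)
            {
                this.maxCallsPerDay = maxCallsPerDay;
            }

            public bool IsAllowed(string apiKey)
            {
                // Reset all counters once per day (coarse, but enough for a sketch).
                if (DateTime.UtcNow.Date > windowStart)
                {
                    callCounts.Clear();
                    windowStart = DateTime.UtcNow.Date;
                }

                int count = callCounts.AddOrUpdate(apiKey, 1, (key, existing) => existing + 1);
                return count <= maxCallsPerDay;
            }
        }

    A real service would persist the counts and put authentication in front of this, but the point is simply that a key, even an imperfectly protected one, gives you something to meter and revoke.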

    Read the article

  • Is the Subversion 'stack' a realistic alternative to Team Foundation Server?

    - by Robert S.
    I'm evaluating Microsoft Team Foundation Server for my customer, who currently uses Visual SourceSafe and nothing else. They have explicitly expressed a desire to implement a more rigid and process-driven environment as their application is in production and they have future releases to consider. The particular areas I'm trying to cover are:
      - Configuration management (e.g., source control)
      - Change management (workflow and doco for change requests, tasks)
      - Release management (builds and deployments)
      - Incident and problem management (issues and bugs)
      - Document management (similar to source control, but available via web)
      - Code analysis constraints on check-ins
      - A testing framework
      - Reporting
      - Visual Studio 2008 integration
    TFS does all of these things quite well, but it's expensive and complex to maintain, and the inexpensive Workgroup edition doesn't scale. We don't get TFS as part of our MSDN subscription. Those problems can be overcome, but before I tell my customer to go the TFS route, which in itself isn't a terrible thing, I wanted to evaluate the alternatives. I know Subversion is often suggested for its configuration management/source control, but what about the other areas? Would a combination of Subversion/NUnit/Wiki/CruiseControl/NAnt/something else satisfy all of these requirements? What tools do I need to include in my evaluation? Or should I just bite the bullet and go with TFS since we're already invested in the Microsoft stack?

    Read the article

  • Is there a Java GUI for dummies?

    - by MHero
    Hello, first I want to apologize for my English; it may not be clear enough. Well, I'm new to Java programming, and I've searched this site about the use of Swing, AWT, 2D, etc., but I didn't get the answer I was looking for. I want to know about a book or method for learning Java GUI programming (not even sure this is the proper term). Previous answers pointed me to Filthy Rich Clients by Romain Guy and also to the Swing Tutorial on Sun's web page. No offense, but the first one seems too complex and the second one a bit disorganized, so I'm asking for a more "for dummies" method. Thanks. EDIT: Thanks everybody, you're very kind and serious. I want to clarify some things that I didn't state, this being my first question. I don't want to use autogenerated code (I won't say why; let's just focus on my question for consistency). Also, I've read Deitel & Deitel and it's a good beginner's book, but it seems to me that it doesn't cover layout (and other details). Finally, I tried to read the NetBeans-generated code, but it's a mess to work through method by method and function by function the way the IDE writes it. I hope this edit helps clarify my question.

    Read the article

  • Is there a case for parameterising using Abstract classes rather than Interfaces?

    - by Chris
    I'm currently developing a component-based API that is heavily stateful. The top-level components implement around a dozen interfaces each. The stock top-level components therefore sit on top of a stack of abstract implementations which in turn contain multiple mixin implementations and implement multiple mixin interfaces. So far, so good (I hope). The problem is that the base functionality is extremely complex to implement (1,000s of lines in 5 layers of base classes) and therefore I do not wish for component writers to implement the interfaces themselves but rather to extend my base classes (where all the boilerplate code is already written). If the API therefore accepts interfaces rather than references to the abstract implementation that I wish for component writers to extend, then I have a risk that the implementer will not perform the validation that is both required and assumed by other areas of code. Therefore, my question is: is it sometimes valid to parameterise API methods using an abstract implementation reference rather than a reference to the interface(s) that it implements? Do you have an example of a well-designed API that uses this technique, or am I trying to talk myself into bad practice?
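
    Purely as an illustrative sketch of the trade-off being described (C# syntax, all names invented): an abstract base can guarantee that the required validation runs no matter what the component writer does, whereas a bare interface cannot.

        using System;

        // The public contract.
        public interface IComponent
        {
            void Process(string input);
        }

        // Abstract base that owns the validation; implementers only fill in the hook method.
        public abstract class ComponentBase : IComponent
        {
            public void Process(string input)            // non-virtual, so the check cannot be skipped
            {
                if (string.IsNullOrEmpty(input))
                    throw new ArgumentException("input must be non-empty", nameof(input));
                ProcessCore(input);                      // implementer-supplied behaviour
            }

            protected abstract void ProcessCore(string input);
        }

        // An API method that accepts the abstract class is guaranteed the validation above;
        // one that accepts IComponent is not.
        public static class Api
        {
            public static void Run(ComponentBase component) => component.Process("data");
        }

    A middle ground some APIs take is to keep the interface in the signature but ship the abstract base as the strongly recommended starting point; the sketch only shows why the stricter option in the question is tempting.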

    Read the article

  • Why does reusing arrays increase performance so significantly in c#?

    - by Willem
    In my code, I perform a large number of tasks, each requiring a large array of memory to temporarily store data. I have about 500 tasks. At the beginning of each task, I allocate memory for an array: double[] tempDoubleArray = new double[M]; M is a large number depending on the precise task, typically around 2000000. Now, I do some complex calculations to fill the array, and in the end I use the array to determine the result of this task. After that, the tempDoubleArray goes out of scope. Profiling reveals that the calls to construct the arrays are time-consuming. So, I decided to try and reuse the array, by making it static and reusing it. It requires some additional juggling to figure out the minimum size of the array, requiring an extra pass through all tasks, but it works. Now, the program is much faster (from 80 sec to 22 sec for execution of all tasks). double[] tempDoubleArray = staticDoubleArray; However, I'm a bit in the dark as to why precisely this works so well. I'd say that in the original code, when the tempDoubleArray goes out of scope, it can be collected, so allocating a new array should not be that hard, right? I ask this because understanding why it works might help me figure out other ways to achieve the same effect, and because I would like to know in what cases allocation gives performance issues.
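
    For reference, a minimal sketch of the two patterns being compared (illustrative names, shapes simplified from the description above):

        using System;

        public static class TaskRunner
        {
            // Pattern 1: allocate a fresh buffer per task (the original approach).
            public static void RunWithFreshArrays(int taskCount, int m)
            {
                for (int task = 0; task < taskCount; task++)
                {
                    double[] tempDoubleArray = new double[m];   // ~16 MB for m = 2,000,000; lands on the large object heap
                    FillAndUse(tempDoubleArray, m);
                }
            }

            // Pattern 2: allocate once, reuse across tasks (the faster approach described above).
            public static void RunWithSharedArray(int taskCount, int maxM)
            {
                double[] sharedBuffer = new double[maxM];       // maxM = the largest size any task needs
                for (int task = 0; task < taskCount; task++)
                {
                    Array.Clear(sharedBuffer, 0, maxM);         // only needed if a task assumes zeroed memory
                    FillAndUse(sharedBuffer, maxM);
                }
            }

            private static void FillAndUse(double[] buffer, int length) { /* the task's real work goes here */ }
        }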

    Read the article

  • c# creating a database query METHOD

    - by Sinaesthetic
    I'm not sure if I'm deluded, but what I would like to do is create a method that will return the results of a query, so that I can reuse the connection code. As I understand it, a query returns an object, but how do I pass that object back? I want to send the query into the method as a string argument, and have it return the results so that I can use them. Here's what I have, which was a stab in the dark; it obviously doesn't work. This example is me trying to populate a listbox with the results of a query; the sheet name is Employees and the field/column is name. The error I get is "Complex DataBinding accepts as a data source either an IList or an IListSource.". Any ideas?

        public Form1()
        {
            InitializeComponent();
            openFileDialog1.ShowDialog();
            openedFile = openFileDialog1.FileName;
            lbxEmployeeNames.DataSource = Query("Select [name] FROM [Employees$]");
        }

        public object Query(string sql)
        {
            System.Data.OleDb.OleDbConnection MyConnection;
            System.Data.OleDb.OleDbCommand myCommand = new System.Data.OleDb.OleDbCommand();
            string connectionPath;

            //build connection string
            connectionPath = "provider=Microsoft.Jet.OLEDB.4.0;Data Source='" + openedFile + "';Extended Properties=Excel 8.0;";
            MyConnection = new System.Data.OleDb.OleDbConnection(connectionPath);
            MyConnection.Open();

            myCommand.Connection = MyConnection;
            myCommand.CommandText = sql;
            return myCommand.ExecuteNonQuery();
        }
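
    (For comparison only, not the poster's code: the binding error happens because ExecuteNonQuery returns an affected-row count rather than rows. A hedged sketch of the same method shape that returns something a ListBox can bind to, using a DataTable, might look like this.)

        using System.Data;
        using System.Data.OleDb;

        public DataTable QueryTable(string sql, string openedFile)
        {
            // Same Jet/Excel connection string as above; names are illustrative.
            string connectionPath = "provider=Microsoft.Jet.OLEDB.4.0;Data Source='" + openedFile + "';Extended Properties=Excel 8.0;";

            using (var adapter = new OleDbDataAdapter(sql, connectionPath))
            {
                var table = new DataTable();
                adapter.Fill(table);          // opens the connection, runs the SELECT, closes the connection
                return table;                 // DataTable implements IListSource, so DataSource accepts it
            }
        }

    The binding code would then typically also set the ListBox's DisplayMember to the column name (e.g. "name").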

    Read the article

  • Asynchronous SQL Operations

    - by Paul Hatcherian
    I've got a problem I'm not sure how best to solve. I have an application which updates a database in response to ad hoc requests. One request in particular is quite common. The request is an update that by itself is quite simple, but has some complex preconditions. For this request the business layer first requests a set of data from the data layer. The business logic layer evaluates the data from the database and the parameters from the request; from this the action to be performed is determined, and the request's response message(s) are created. The business layer now executes the actual update command that is the purpose of the request. This last step is the problem: this command is dependent on the state of the database, which might have changed since the business logic ran. Locking down the data read in this operation across several round-trips to the database doesn't seem like a good idea either. Is there a 'best-practice' way to accomplish something like this? Thanks!
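
    One pattern often mentioned for exactly this read-evaluate-update gap (offered only as an illustrative sketch, with invented table and column names) is an optimistic check: the final UPDATE re-asserts the state the business logic relied on, and the caller re-reads and re-evaluates if no row was affected.

        using System.Data.SqlClient;

        public static bool TryApplyUpdate(string connectionString, int requestId, byte[] expectedRowVersion)
        {
            const string sql =
                "UPDATE Requests " +
                "SET Status = 'Processed' " +
                "WHERE RequestId = @id AND RowVersion = @expectedVersion";   // only succeeds if nothing changed since the read

            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(sql, connection))
            {
                command.Parameters.AddWithValue("@id", requestId);
                command.Parameters.AddWithValue("@expectedVersion", expectedRowVersion);
                connection.Open();
                return command.ExecuteNonQuery() == 1;   // false => state changed; re-read and re-evaluate
            }
        }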

    Read the article

  • Running an application from a USB device...

    - by Workshop Alex
    I'm working on a proof-of-concept application, containing a WCF service with console host and client, both on a single USB device. On the same device I will also have the client application which will connect to this service. The service uses the entity framework to connect to the database, which in this POC will just return a list of names. If it works, it will be used for a larger project. Creating the client and service was easy and this works well from USB. But getting the service to connect to the database isn't. I've found this site, suggesting that I should modify machine.config but that stops the XCopy deployment. This project cannot change any setting of the PC, so this suggestion is bad. I cannot create a deployment setup either. The whole thing just needs to run from USB disk. So, how do I get it to run? (The service just selects a list of names from the database, which it returns to the client. If this POC works, it will do far more complex things!)
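
    One direction sometimes suggested for this kind of XCopy scenario (only a sketch, not verified against this project; the model, file and provider names are assumptions) is to skip the config files entirely and build the Entity Framework connection string in code, relative to wherever the executable is running from on the stick:

        using System;
        using System.Data.EntityClient;
        using System.IO;

        public static string BuildEntityConnectionString()
        {
            // Folder the service executable is actually running from (i.e. the USB drive).
            string baseDir = AppDomain.CurrentDomain.BaseDirectory;
            string dbPath = Path.Combine(baseDir, "Names.sdf");   // hypothetical file-based database (e.g. SQL Server Compact)

            var builder = new EntityConnectionStringBuilder
            {
                Provider = "System.Data.SqlServerCe.3.5",
                ProviderConnectionString = "Data Source=" + dbPath,
                Metadata = @"res://*/NamesModel.csdl|res://*/NamesModel.ssdl|res://*/NamesModel.msl"   // hypothetical model name
            };
            return builder.ConnectionString;
        }

        // The generated ObjectContext can then be constructed with this string instead of a
        // connection string read from machine.config/app.config:
        //   using (var context = new NamesEntities(BuildEntityConnectionString())) { ... }

    Whether that is enough depends on whether the chosen ADO.NET provider itself can run without machine-wide registration, which is exactly the part the question is wrestling with.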

    Read the article

  • C++ AMP, for loops to parallel_for_each loop

    - by user1430335
    I'm converting an algorithm to make use of the massive acceleration that C++ AMP provides. The stage I'm at is putting the for loops into the known parallel_for_each loop. Normally this should be a straightforward task to do, but it appears more complex than I first thought. It's a nested loop which I increment in steps of 4 per iteration:

        for(int j = 0; j < height; j += 4, data += width * 4 * 4)
        {
            for(int i = 0; i < width; i += 4)
            {

    The trouble I'm having is the use of the index. I can't seem to find a way to properly fit this into the parallel_for_each loop. Using an index of rank 2 is the way to go, but manipulating it via branching will do harm to the performance gain. I found a similar post: Controlling the index variables in C++ AMP. It also deals with index manipulation, but the increment aspect doesn't cover my issue. With kind regards, Forcecast

    Read the article

  • How can I get the search parameters from jqgrid on the server side?

    - by Jack
    I've been visiting this forum a lot without registering for several months, and I really like it. So, thanks in advance to all the members. Now I'd like to ask my first question. I've been using jqGrid for a little while, and I've managed to have it display the rows and buttons, but now I need to do a search, a complex one, and I thought that "automatically" jqGrid would send the parameters to the server, I mean: sField, searchField, sOper, searchOper, sValue, searchString, sFilter and/or filters. I'm not sure at all which ones it has to send, and I thought it would be just the same as it sends 'page', 'rows' and 'sord'. But I'm missing something, because, for example, I can get 'page', 'rows' and 'sord' using: $limit = $this->getRequest()->getParam('rows', 10); but I get nothing by using: $params = $_REQUEST['filters'] or $params = $this->getRequest()->getParam('sFilter'); I'm using PHP, Zend and JSON. I didn't post any code because my doubt is kind of generic, but I will do it if needed. I've searched a lot, and read the documentation, but I just don't see it. I will appreciate your help, thanks!

    Read the article

  • How do I get my Windows Forms application to use a custom main function and get access to the Applic

    - by burble
    Hi Folks I am trying to use a Main () function in a class to control the program flow in my vb .net Windows Forms application. I have added a splash screen component and a login screen, and customised my main sdi form. I have set the startup form to be my main function in the Application Page of the Project Designer, and everything seems to work fine(ish). However, I would like to use: Me.MinimumSplashScreenDisplayTime = 5000 to ensure that the splash screen is visible, but it is not recognised by the system unless I tick the Enable Application Framework check box on the Project Designer. If I do this, on startup the program ignores the login and splash screens and all my customisation and just displays a default Form1, even though I have also specified my splash screen in the AF dropdown list. Of course, there are alternative ways to delay a splash screen, such as putting the thread temporarily to sleep (which didn't seem to work), but I suspect that there are other things in the AF that I may want to use. Any suggestions on how I can get round this please, and get a sensible means of controlling program flow? Any thoughts on the best overall structure for organising program flow would also be helpful too. I am concerned both about going down a Microsoft or an alternative custom route that may cause me problems later, as the application becomes more complex. Thankses.

    Read the article

  • Convert asp.net application to windows forms app

    - by rogdawg
    I have written and deployed an ASP.NET application that is pretty complex. It uses XSL transformations to create web forms for a large variety of data objects. The data comes from the database as XML via a web service. Now, I need to create a Windows desktop application that will provide a small subset of the web application's functionality to a user who may not have access to the web (working in remote areas). I will provide the data syncing using the MS Sync Framework. And I will have the desktop app use a local data store. I would like to use the same XSLT files in the desktop app that I use in the web app for the form creation so that, if changes are made, the desktop app can update itself when it connects and syncs its data. But I am wondering how to replicate the ASP.NET code-behind logic of my web app in the Windows forms. If I use a browser control to render the XSL transformation result, then how could I handle click events, etc., in the form? Also, can I launch other windows as "dialog boxes" from my Windows forms (I do this in my web app using RadControls functionality)? Thanks for any advice you can give.
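
    Purely as an illustrative sketch of the browser-control idea mentioned above (file names and the wiring are invented, nothing here is from the original application), one common way such rendering is set up in Windows Forms is:

        using System.IO;
        using System.Windows.Forms;
        using System.Xml;
        using System.Xml.Xsl;

        public static void RenderForm(WebBrowser browser, string xmlPath, string xsltPath)
        {
            // Run the same XSLT used by the web app against XML from the local data store.
            var transform = new XslCompiledTransform();
            transform.Load(xsltPath);

            var html = new StringWriter();
            using (var reader = XmlReader.Create(xmlPath))
            {
                transform.Transform(reader, null, html);   // null = no XsltArgumentList
            }

            // Hand the generated markup to the browser control instead of a web server.
            browser.DocumentText = html.ToString();
        }

        // Click handling can then be done from the host form, for example by subscribing to
        // browser.Document.Click after DocumentCompleted fires, or by exposing a C# object
        // to the page's script via browser.ObjectForScripting.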

    Read the article

  • Should I learn two (or more) programming languages in parallel?

    - by c_maker
    I found entries on this site about learning a new programming language; however, I have not come across anything that talks about the advantages and disadvantages of learning two languages at the same time. Let's say my goal is to learn two new languages in a year. I understand that the definition of learning a new language is different for everyone and you can probably never know everything about a language. I believe in most cases the following things are enough to include the language in your resume and say that you are proficient in it (list is not in any particular order):
      - Know its syntax so you can write a simple program in it
      - Compare its underlying concepts with concepts of other languages
      - Know best practices
      - Know what libraries are available
      - Know in what situations to use it
      - Understand the flow of a more complex program
      - At least know most of what you do not know
    I would probably look for a good book and pick an open source project for both of these languages to start with. My questions:
      - Is it best to spend 5 months learning language#1 then 5 months learning language#2, or should you mix the two? By mixing them I mean you work on them in parallel.
      - Should you pick two languages that are similar or different?
      - Are there any advantages/disadvantages of, let's say, learning Lisp in tandem with Ruby?
      - Is it a good idea to pick two languages with similar syntax or would it be too confusing?
    Please tell me what your experiences are regarding this. Does it make a difference if you are a beginner or a senior programmer?

    Read the article

  • MySQL Datefields: duplicate or calculate?

    - by Konerak
    We are using a table with a structure imposed upon us more than 10 years ago. We are allowed to add columns, but urged not to change existing columns. Certain columns are meant to represent dates, but are stored in different formats. Amongst others:
      * CHAR(6): YYMMDD
      * CHAR(6): DDMMYY
      * CHAR(8): YYYYMMDD
      * CHAR(8): DDMMYYYY
      * DATE
      * DATETIME
    Since we now would like to do some more complex queries, using advanced date functions, my manager proposed to duplicate those problem columns to a proper FORMATTED_OLDCOLUMNNAME column using a DATE or DATETIME format. Is this the way to go? Couldn't we just use the STR_TO_DATE function each time we accessed the columns? To avoid every query having to copy-paste the function, I could still work with a view or a stored procedure, but duplicating data to avoid recalculation sounds wrong. Solutions I see (I guess I prefer 2.2.1):
      1. Physically duplicate columns
         1.1 In the same table
             1.1.1 Added by each script that does a modification (INSERT/UPDATE/REPLACE/...)
             1.1.2 Maintained by a trigger on each modification
         1.2 In a separate table
             1.2.1 Added by each script that does a modification (INSERT/UPDATE/REPLACE/...)
             1.2.2 Maintained by a trigger on each modification
      2. On-demand transformation
         2.1 Each query has to perform the transformation
             2.1.1 Using copy-paste in the source code
             2.1.2 Using a library
             2.1.3 Using a STORED PROCEDURE
         2.2 A view performs the transformation
             2.2.1 A separate table replacing the entire table
             2.2.2 A separate table just adding the date-fields for the primary keys
    Am I right to say it's better to recalculate than to store? And would a view be a good solution?

    Read the article

  • Are reads and (transactional) writes faster for entities of the same group than otherwise?

    - by indiehacker
    What advantage is there to designing child-parent relationships, which allow us to do writes in transactions, when there is never a real concern for consistency and contention and those sorts of more complex issues? Does it make writes and reads faster? Consider my situation where there are many .png images that are referenced to one mosaic layer, and these .png images are written just once by a single user. The user can design many mosaic layers, and her mosaic layers and referenced image entities are never changed/updated; they are just deleted some time in the future. Other users can come to the web project site and interactively view the mosaic layer as different layouts/configurations of the images as they play (query) with different criteria. So reads should be very fast. So there is no real worry of contention, or of users conflicting with one another when writing new image entities. And because of that I am assuming there is no "requirement" for the .png image entities to be grouped with their mosaic layer in a child-parent relationship. However, perhaps, since the documentation says they are stored close to one another, if the many image entities were grouped as children of a single mosaic layer parent then this would have the advantage that the writing (in transaction) and reading will happen much faster?

    Read the article

  • Peoplesoft queries - performance

    - by DBa
    Hi, I'm facing a problem with PeopleSoft queries (using an Oracle backend database): when a rather complex query involving multiple records is set off by a user, PS does an enforced join of security records, thus producing SQL like this:

        select ....
        from ps_job a, PS_EMPL_SRCQRY a1,
             ps_table2 b, ps_sec_rcd2 b1,
             ps_table3 c, ps_sec_rcd3 c1
        where (...security joins a-a1, b-b1, c-c1...)
          and (...joins of a, b and c...)
          and a.setid_dept = 'XYZ';

    (let's assume the last condition has a high selectivity and there is an index on the column) Obviously, due to the arrangement of the conditions, first a huge join is created, written to the temp segment, and when the last condition is finally applied, only a small subset is selected. A query formulated in this way is very likely to hit the preset timeout of the APPSRV, and even of the QRYSRV. When writing the query manually, I would rather move the most selective condition to the start, thus limiting the amount of data being handled to a considerable degree. Any ideas on how to make PS behave like this? Actually, already rewriting "Oracle-styled" SQL to ANSI SQL seems to accelerate the queries - however, PS writes Oracle-style queries... Thanks in advance DBa

    Read the article

  • PGU HTML Renderer can't render most sites

    - by None
    I am trying to make a web browser using pygame. I am using PGU for HTML rendering. It works fine when I visit simple sites, like example.com, but when I try to load anything more complex that uses an HTML form, like Google, I get this error:

        UnboundLocalError: local variable 'e' referenced before assignment

    I looked in the PGU HTML rendering file and found this code segment:

        def start_input(self,attrs):
            r = self.attrs_to_map(attrs)
            params = self.map_to_params(r)
            #why bother
            #params = {}
            type_,name,value = r.get('type','text'),r.get('name',None),r.get('value',None)
            f = self.form
            if type_ == 'text':
                e = gui.Input(**params)
                self.map_to_connects(e,r)
                self.item.add(e)
            elif type_ == 'radio':
                if name not in f.groups:
                    f.groups[name] = gui.Group(name=name)
                g = f.groups[name]
                del params['name']
                e = gui.Radio(group=g,**params)
                self.map_to_connects(e,r)
                self.item.add(e)
                if 'checked' in r: g.value = value
            elif type_ == 'checkbox':
                if name not in f.groups:
                    f.groups[name] = gui.Group(name=name)
                g = f.groups[name]
                del params['name']
                e = gui.Checkbox(group=g,**params)
                self.map_to_connects(e,r)
                self.item.add(e)
                if 'checked' in r: g.value = value
            elif type_ == 'button':
                e = gui.Button(**params)
                self.map_to_connects(e,r)
                self.item.add(e)
            elif type_ == 'submit':
                e = gui.Button(**params)
                self.map_to_connects(e,r)
                self.item.add(e)
            elif type_ == 'file':
                e = gui.Input(**params)
                self.map_to_connects(e,r)
                self.item.add(e)
                b = gui.Button(value='Browse...')
                self.item.add(b)
                def _browse(value):
                    d = gui.FileDialog();
                    d.connect(gui.CHANGE,gui.action_setvalue,(d,e))
                    d.open();
                b.connect(gui.CLICK,_browse,None)
            self._locals[r.get('id',None)] = e

    I got the error in the last line, because e wasn't defined. I am guessing the reason for this is that the if statement that checks the type of the input and creates the e variable didn't match anything. I added a line to print the type_ variable and I got 'hidden' when I tried Google and Apple. Is there any way to render form items that have the type 'hidden' with PGU?

    Read the article

  • Do I need a spatial index in my database?

    - by Sanoj
    I am designing an application that needs to save geometric shapes in a database. I haven't chosen the database management system yet. In my application, all database queries will have a bounding box as input, and as output I want all shapes within that bounding box. I know that databases with a spatial index are used for this kind of application. But in my application there will not be any queries of the type "give me objects near x/y" or other more complex queries that are useful in a GIS application. I am planning on having a database without a spatial index and having queries looking like:

        SELECT * FROM shapes
        WHERE x < max_x AND x > min_x
          AND y < max_y AND y > min_y

    And have an index on the columns x (double) and y (double). As far as I can see, I don't really need a database with a spatial index, even though my application is close to that kind of application. And even if I would like to have nearby queries, then I could create a big enough bounding box around that point. Or will this lead to poor performance? Do I really need a spatial database? And when is a spatial index needed?

    Read the article

  • C++ gdb GUI

    - by HappyDude
    Briefly: Does anyone know of a GUI for gdb that brings it on par or close to the feature set you get in the more recent version of Visual C++? In detail: As someone who has spent a lot of time programming in Windows, one of the larger stumbling blocks I've found whenever I have to code C++ in Linux is that debugging anything using commandline gdb takes me several times longer than it does in Visual Studio, and it does not seem to be getting better with practice. Some things are just easier or faster to express graphically. Specifically, I'm looking for a GUI that:
      - Handles all the basics like stepping over & into code, watch variables and breakpoints
      - Understands and can display the contents of complex & nested C++ data types
      - Doesn't get confused by and preferably can intelligently step through templated code and data structures while displaying relevant information such as the parameter types
      - Can handle threaded applications and switch between different threads to step through or view the state of
      - Can handle attaching to an already-started process or reading a core dump, in addition to starting the program up in gdb
    If such a program does not exist, then I'd like to hear about experiences people have had with programs that meet at least some of the bullet points. Does anyone have any recommendations? Edit: Listing out the possibilities is great, and I'll take what I can get, but it would be even more helpful if you could include in your responses: (a) Whether or not you've actually used this GUI and if so, what positive/negative feedback you have about it. (b) If you know, which of the above-mentioned features are/aren't supported. Lists are easy to come by, sites like this are great because you can get an idea of people's personal experiences with applications.

    Read the article

  • Accessing both stored procedure output parameters AND the result set in Entity Framework?

    - by MS.
    Is there any way of accessing both a result set and output parameters from a stored procedure added in as a function import in an Entity Framework model? I am finding that if I set the return type to "None", such that the designer-generated code ends up calling base.ExecuteFunction(...), I can access the output parameters fine after calling the function (but of course not the result set). Conversely, if I set the return type in the designer to a collection of complex types, then the designer-generated code calls base.ExecuteFunction<T>(...) and the result set is returned as ObjectResult<T>, but then the Value property for the ObjectParameter instances is NULL rather than containing the proper value that I can see being passed back in Profiler. I speculate the second method is perhaps calling a DataReader and not closing it. Is this a known issue? Any work-arounds or alternative approaches? Edit: My code currently looks like

        public IEnumerable<FooBar> GetFooBars(
            int? param1, string param2,
            DateTime from, DateTime to,
            out DateTime? createdDate, out DateTime? deletedDate)
        {
            var createdDateParam = new ObjectParameter("CreatedDate", typeof(DateTime));
            var deletedDateParam = new ObjectParameter("DeletedDate", typeof(DateTime));
            var fooBars = MyContext.GetFooBars(param1, param2, from, to, createdDateParam, deletedDateParam);
            createdDate = (DateTime?)(createdDateParam.Value == DBNull.Value ? null : createdDateParam.Value);
            deletedDate = (DateTime?)(deletedDateParam.Value == DBNull.Value ? null : deletedDateParam.Value);
            return fooBars;
        }
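
    (An aside, offered as a hedged suggestion rather than a confirmed fix: with function imports that return an ObjectResult<T>, the output parameters are generally only populated once the underlying reader has been consumed, so forcing full enumeration before reading them is worth trying.)

        // Force the result set to be read completely so the underlying DataReader closes,
        // then inspect the output parameters. (ToList() needs a using System.Linq; directive.)
        var fooBars = MyContext.GetFooBars(param1, param2, from, to, createdDateParam, deletedDateParam).ToList();

        createdDate = (DateTime?)(createdDateParam.Value == DBNull.Value ? null : createdDateParam.Value);
        deletedDate = (DateTime?)(deletedDateParam.Value == DBNull.Value ? null : deletedDateParam.Value);
        return fooBars;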

    Read the article

  • Changing style sheets depending on useragent

    - by John Vasiliou
        <script language="Javascript">
        var deviceIphone = "iPhone";
        var deviceIpod = "iPod";

        //Initialize our user agent string to lower case.
        var uagent = navigator.userAgent.toLowerCase();

        //**************************
        // Detects if the current device is an iPhone.
        function DetectiPhone()
        {
           if (uagent.search(deviceIphone) > -1)
              {document.write('<link rel="stylesheet" type="text/css" href="ui/mobile/css/site.css">'); }

        etc...

    Above is the start of my code. I am trying to change the CSS file depending on what platform the user is using. I currently use media="screen ... " but it doesn't work with the number of platforms I'm trying to use. I need something a lot more detailed/complex, which is why I'm turning to user agents. Any ideas why the CSS file doesn't change on my iPhone using the above code? Better yet, any ideas on another way to change style sheets depending on the user's platform/screen resolution?

    Read the article

  • R Random Data Sets within loops

    - by jugossery
    Here is what I want to do: I have a time series data frame with, let us say, 100 time-series of length 600 - each in one column of the data frame. I want to pick 4 of the time-series randomly and then assign them random weights that sum up to one (i.e. 0.1, 0.5, 0.3, 0.1). Using those I want to compute the mean of the sum of the 4 weighted time series variables (e.g. convex combination). I want to do this, let us say, 100k times and store each result in the form ts1.name, ts2.name, ts3.name, ts4.name, weight1, weight2, weight3, weight4, mean so that I get a 9*100k df. I tried some things already but R is very bad with loops and I know vector oriented solutions are better because of R design. Thanks. Here is what I did, and I know it is horrible. The df is in the form

        v1,v2,v3,.....,v100
        1,5,6,.......9
        2,4,6,.......10
        3,5,8,.......6
        2,2,8,.......2
        etc

        e=NULL
        for (x in 1:100000) {
          s=sample(1:100,4)           #pick 4 variables randomly
          a=sample(seq(0,1,0.01),1)
          b=sample(seq(0,1-a,0.01),1)
          c=sample(seq(0,(1-a-b),0.01),1)
          d=1-a-b-c
          e=c(a,b,c,d)                #4 random weights
          average=mean(timeseries.df[,s]%*%t(e))
          e=rbind(e,s,average)        #in the end i get the 9*100k df
        }

    The procedure runs way too slow. EDIT: Thanks for the help I've had. I am not used to thinking in R, and I am not very used to translating every problem into a matrix algebra equation, which is what you need in R. Then the problem becomes a little bit more complex if I want to calculate the standard deviation. I need the covariance matrix and I am not sure if/how I can pick random elements for each sample from the original timeseries.df covariance matrix, then compute the sample variance (t(sampleweights)%*%sample_cov.mat%*%sampleweights) to get in the end the ts.weighted_standard_dev matrix. Last question: what is the best way to proceed if I want to bootstrap the original df x times and then apply the same computations to test the robustness of my data? Thanks

    Read the article

  • What are the steps for SQL optimization and changes without affecting the live system?

    - by Space Cracker
    We have a big portal built using SharePoint 2007, ASP.NET 3.5 and SQL Server 2005. Many developers have worked on it since 01/2008, and we are now doing a huge analysis of the current SQL databases [not the SharePoint DB] to optimize and enhance them.
      - The main DB has about 330 tables and 1,720 stored procedures (SPs) created from 01/2008 till now
      - Many table and column names are very long and we want to shorten them
      - We found SP names are written in 25 formats :( , some of them are very complex, and many SP parameters need to be renamed
      - One of the biggest tables is the Registered User table, which will be split into more than one table for some optimization; many column names will be changed
    I searched for a way to rename tables and columns and I found the SQL Refactor tool, but I am still trying it out. My questions: Is SQL Refactor the best tool for renaming, or is there another one? If I want to do it manually, are there any references or best practices for that? How can I make such changes in a fast and stable way? I am looking for recommendations and case studies if they exist.

    Read the article

  • Core Data Errors vs Exceptions Part 3

    - by John Gallagher
    My question is similar to this one. Background I'm creating a large number of objects in a core data store using NSOperations to speed things up. I've followed all the Core Data multithreading rules - I've got a single persistent store coordinator and a managed object context per thread that on save is merging back to the main managed object context. The Problem When the number of threads running at once is more than 1, I get the exception logged on save of my core data store: NSExceptionHandler has recorded the following exception: NSInternalInconsistencyException -- optimistic locking failure What I've Tried My code that creates new entities is quite complex - it makes entities that have relationships with other entities that could be being created in a separate thread. If I replace my object creation routine with some very simple code just making non-related entries, everything works perfectly. Initially, as well as the exceptions, I was getting a save error saying core data couldn't save due to the merge failing. I read the docs and realised I needed a merge policy on the Managed Object Context I was saving to. I set this up and as this question states, the save error goes away, but the exception remains. My Question Do I need to worry about these exceptions? If I do need to get rid of the exceptions, any ideas on how I do it?

    Read the article

  • How do I call C++/CLI (.NET) DLLs from standard, unmanaged non-.NET applications?

    - by tronjohnson
    In the unmanaged world, I was able to write a __declspec(dllexport) or, alternatively, use a .DEF file to expose a function so it could be called from outside the DLL. (Because of name mangling in C++ for the __stdcall, I put aliases into the .DEF file so certain applications could re-use certain exported DLL functions.) Now, I am interested in being able to expose a single entry-point function from a .NET assembly, in an unmanaged fashion, but have it enter into .NET-style functions within the DLL. Is this possible, in a simple and straightforward fashion? What I have is a third-party program that I have extended through DLLs (plugins) that implement some complex mathematics. However, the third-party program has no means for me to visualize the calculations. I want to somehow take these pre-written math functions, compile them into a separate DLL (but using C++/CLI in .NET), but then add hooks to the functions so I can render what's going on under the hood in a .NET user control. I'm not sure how to blend the .NET stuff with the unmanaged stuff, or what to Google to accomplish this task. Specific suggestions with regard to the managed/unmanaged bridge, or alternative methods to accomplish the rendering in the manner I have described, would be helpful. Thank you.

    Read the article
