Search Results

Search found 21511 results on 861 pages for 'appstore approval process'.

Page 707/861 | < Previous Page | 703 704 705 706 707 708 709 710 711 712 713 714  | Next Page >

  • SQL Server CLR stored procedures in data processing tasks - good or evil?

    - by Gart
    In short: is it a good design decision to implement most of the business logic in CLR stored procedures? I have read a lot about them recently, but I can't figure out when they should be used, what the best practices are, and whether they are good enough or not.

    For example, my business application needs to parse a large fixed-length text file, extract some numbers from each line in the file, apply some complex business rules according to these numbers (involving regex matching, pattern matching against data from many tables in the database and such), and as a result of this calculation update records in the database. There is also a GUI for the user to select the file, view the results, etc.

    This application seems to be a good candidate for the classic 3-tier architecture: the Data Layer, the Logic Layer, and the GUI Layer.

    - The Data Layer would access the database.
    - The Logic Layer would run as a WCF service and implement the business rules, interacting with the Data Layer.
    - The GUI Layer would be a means of communication between the Logic Layer and the user.

    Now, thinking this design over, I can see that most of the business rules could be implemented in SQL CLR and stored in SQL Server. I might store all my raw data in the database, run the processing there, and get the results. I see some advantages and disadvantages of this solution:

    Pros:
    - The business logic runs close to the data, meaning less network traffic.
    - All data is processed at once, possibly utilizing parallelism and an optimal execution plan.

    Cons:
    - The business logic gets scattered: some part is here, some part is there.
    - A questionable design decision that may run into unknown problems.
    - A progress indicator for the processing task is difficult to implement.

    I would like to hear all your opinions about SQL CLR. Does anybody use it in production? Are there any problems with such a design? Is it a good thing?
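
    For readers unfamiliar with the mechanics being weighed here, a minimal SQL CLR stored procedure looks roughly like the sketch below (table and column names are illustrative, not from the question). It shows the pattern under discussion: .NET code running inside SQL Server on the in-process "context connection", close to the data.

        using System.Data.SqlClient;
        using Microsoft.SqlServer.Server;

        public class Procedures
        {
            // Deployed with CREATE ASSEMBLY / CREATE PROCEDURE ... EXTERNAL NAME.
            [SqlProcedure]
            public static void ProcessRawLines()
            {
                // "context connection" runs inside the caller's session - no network hop
                using (var conn = new SqlConnection("context connection=true"))
                {
                    conn.Open();
                    var cmd = new SqlCommand(
                        "SELECT Id, RawLine FROM dbo.RawData WHERE Processed = 0", conn);
                    using (SqlDataReader reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            // parse the fixed-length line, apply rules, queue an update...
                        }
                    }
                }
            }
        }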

    Read the article

  • StackOverflowException - but obviously NO recursion/endless loop

    - by user567706
    Hi there, I've been blocked by this problem the entire day. I've read thousands of Google results, but nothing seems to reflect my problem or even come near it... I hope one of you can give me a push in the right direction.

    I wrote a client-server application (so more like two applications). The client collects data about its system, as well as a screenshot, serializes all this into an XML stream (the picture as a byte[] array) and sends it to the server at regular intervals. The server receives the stream (via TCP), deserializes the XML to an information object and shows the information on a Windows form.

    This process runs stably for about 20-25 minutes at a submission interval of 3 seconds. When observing the memory usage there's nothing significant to see; it is also fairly stable. But after these 20-25 minutes the server throws a StackOverflowException at the point where it deserializes the TCP stream, specifically when setting the Image property from the byte[] array. I thoroughly searched for recursion or endless loops, and given that it occurs after thousands of successful intervals, I can hardly imagine that's the cause.

        public byte[] ImageBase
        {
            get
            {
                MemoryStream ms = new MemoryStream();
                _screen.Save(ms, System.Drawing.Imaging.ImageFormat.Jpeg);
                return ms.GetBuffer();
            }
            set
            {
                if (_screen != null)
                    _screen.Dispose(); // preventing well-known image memory leak
                MemoryStream ms = new MemoryStream(value);
                try
                {
                    _screen = Image.FromStream(ms); // << EXCEPTION THROWN HERE
                }
                catch (StackOverflowException ex) // thx to new CLR management this won't work anymore -.-
                {
                    Console.WriteLine(ex.Message + Environment.NewLine + ex.StackTrace);
                }
                ms.Dispose();
                ms = null;
            }
        }

    I hope more code is unnecessary, or it could get very complex... Please help, I have no clue at all anymore. Thx, Chris
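
    One detail worth flagging (an observation about the quoted code, not an answer from the thread): GDI+ requires the stream passed to Image.FromStream to remain open for the lifetime of the returned image, yet the setter above disposes it immediately. A common workaround is to copy the image into a stream-independent bitmap so the stream can be released safely; the setter could become:

        set
        {
            if (_screen != null)
                _screen.Dispose();
            using (var ms = new MemoryStream(value))
            using (var temp = Image.FromStream(ms))
            {
                // Clone into a bitmap that no longer references ms,
                // so disposing the stream is safe.
                _screen = new Bitmap(temp);
            }
        }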

    Read the article

  • Printing from web pages (reports especially) with greater precision.

    - by Kabeer
    Hello. I am re-engineering a Windows application to be ported to the web. One area that has been worrying me is printing. The application is data-intensive and complex reports need to be generated.

    The erstwhile Windows application takes advantage of printer APIs and extends sophisticated control to the users. It supports functions like page breaks, avoiding printing on pre-printed parts of the sheet (like a letterhead), choice of layouts and orientation, etc. Please note that these settings are not made only while printing; they are sometimes part of the report definition.

    From what I know, we cannot have this kind of control while printing web pages. I am in the process of identifying the options at my disposal. While I would prefer to first look into something that will help me print from raw web pages, here are some other thoughts:

    - Since reports can also be exported to .xls and .pdf versions, let the user download one and print directly. This, however, limits my solution to the areas of the application that have the export feature.
    - Use Silverlight (4.0) for report layout definition and printing. I think Silverlight 4.0 (in beta right now) provides adequate control over the printer. I have so far been avoiding the need for any RIA plugin.
    - Meticulously generate reports on the web with fixed dimensions. I am not sure how far this will go.

    Please share practices that can be applied easily in my scenario.
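
    As a baseline for the "raw web pages" route, CSS print stylesheets do cover a few of the requirements mentioned (page breaks, reserved margins for a letterhead). A minimal sketch; the class names are made up for illustration:

        /* reserve the letterhead area at the top of each printed sheet */
        @page { margin-top: 5cm; }

        @media print {
            /* force each report section onto its own page */
            .report-section { page-break-after: always; }

            /* hide navigation and other screen-only chrome */
            .no-print { display: none; }
        }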

    Read the article

  • twitter bootstrap typeahead 2.0.4 ajax error

    - by Adam Levitt
    I have the following code, which definitely returns a proper data result if I use the 2.0.0 version, but for some reason Bootstrap's typeahead plugin is giving me an error. I pasted it below the code sample:

        <input id="test" type="text" />

        $('#test').typeahead({
            source: function(typeahead, query) {
                return $.post('/Profile/searchFor', { query: query }, function(data) {
                    return typeahead.process(data);
                });
            }
        });

    When I run this example I get the following error (Firebug trace):

        item is undefined
        matcher(item=undefined)                  bootst...head.js (line 104)
        (?)(item=undefined)                      bootst...head.js (line 91)
        f(a=function(), b=function(), c=false)   jquery....min.js (line 2)
        lookup(event=undefined)                  bootst...head.js (line 90)
        keyup(e=Object { originalEvent=Event keyup, type="keyup", timeStamp=145718131, more...})
                                                 bootst...head.js (line 198)
        f()                                      jquery....min.js (line 2)
        add(c=Object { originalEvent=Event keyup, type="keyup", timeStamp=145718131, more...})
                                                 jquery....min.js (line 3)
        add(a=keyup charCode=0, keyCode=70)      jquery....min.js (line 3)
        [Break On This Error]
        return item.toLowerCase().indexOf(this.query.toLowerCase())

    Any thoughts? The same code works in the 2.0.0 version of the plugin, but fails to write to my knockout object model.
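
    For context (my reading of the error, not from the thread): if this build of the plugin expects source to be an array, or a function returning one, synchronously, then the jqXHR object returned by $.post gets treated as the item list, which would explain matcher(item=undefined). Later releases of the typeahead plugin added async support via a (query, process) source signature; with a version that supports it, the call would look roughly like this:

        $('#test').typeahead({
            source: function(query, process) {
                // hand the results to the plugin's own callback when the POST returns
                $.post('/Profile/searchFor', { query: query }, function(data) {
                    process(data);
                });
            }
        });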

    Read the article

  • Designing a fluid Javascript interface to hide callback asynchrony

    - by Anurag
    How would I design an API to hide the asynchronous nature of AJAX and HTTP requests, or basically delay it, to provide a fluid interface? To show an example from Twitter's new Anywhere API:

        // get @ded's first 20 statuses, filter only the tweets that
        // mention photography, and render each into an HTML element
        T.User.find('ded').timeline().first(20).filter(filterer).each(function(status) {
            $('div#tweets').append('<p>' + status.text + '</p>');
        });

        function filterer(status) {
            return status.text.match(/photography/);
        }

    versus this (where the asynchronous nature of each call is clearly visible):

        T.User.find('ded', function(user) {
            user.timeline(function(statuses) {
                statuses.first(20).filter(filterer).each(function(status) {
                    $('div#tweets').append('<p>' + status.text + '</p>');
                });
            });
        });

    It finds the user, gets their tweet timeline, takes only the first 20 tweets, applies a custom filter, and ultimately uses the callback function to process each tweet.

    I am guessing that a well-designed API like this should work like a query builder (think ORMs), where each function call builds up the query (the HTTP URL in this case) until it hits a looping function such as each/map/etc.; at that point the HTTP call is made and the passed-in function becomes the callback.

    An easy development route would be to make each AJAX call synchronous, but that's probably not the best solution. I am interested in figuring out a way to make it asynchronous, and still hide the asynchronous nature of AJAX.
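
    As a rough illustration of the query-builder idea described in the question (all names and the endpoint are made up; this is not Twitter's implementation): each chained call only records state, and the terminal each() fires the single AJAX request and applies the deferred operations in its callback:

        function UserQuery(name) {
            this.params = { screen_name: name };
            this.ops = []; // deferred operations, applied when data arrives
        }

        UserQuery.prototype.timeline = function() {
            this.params.resource = 'statuses/user_timeline';
            return this;
        };

        UserQuery.prototype.first = function(n) {
            this.ops.push(function(items) { return items.slice(0, n); });
            return this;
        };

        UserQuery.prototype.filter = function(fn) {
            this.ops.push(function(items) { return items.filter(fn); });
            return this;
        };

        // terminal method: only now does the HTTP request actually happen
        UserQuery.prototype.each = function(fn) {
            var ops = this.ops;
            $.getJSON('/api/' + this.params.resource, this.params, function(items) {
                ops.forEach(function(op) { items = op(items); });
                items.forEach(fn);
            });
        };

        // usage: reads synchronously, runs asynchronously
        new UserQuery('ded').timeline().first(20).filter(function(s) {
            return /photography/.test(s.text);
        }).each(function(s) {
            $('#tweets').append('<p>' + s.text + '</p>');
        });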

    Read the article

  • Setting the comment of a column to that of another column in Postgresql

    - by dland
    Suppose I create a table in PostgreSQL with a comment on a column:

        create table t1 (
            c1 varchar(10)
        );
        comment on column t1.c1 is 'foo';

    Some time later, I decide to add another column:

        alter table t1 add column c2 varchar(20);

    I want to look up the comment contents of the first column and associate them with the new column:

        select comment_text from (what?)
        where table_name = 't1' and column_name = 'c1'

    The (what?) is going to be a system table, but after having looked around in pgAdmin and searched the web I haven't learnt its name. Ideally I'd like to be able to write:

        comment on column t1.c1 is (select ...);

    but I have a feeling that's stretching things a bit far. Thanks for any ideas.

    Update: based on the suggestions I received here, I wound up writing a program to automate the task of transferring comments, as part of a larger process of changing the datatype of a PostgreSQL column. You can read about that on my blog.
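
    For reference, the system catalog in question is pg_description, keyed by the table's OID and the column number; the built-in col_description() helper wraps the lookup. A sketch of both routes:

        -- read the comment on t1.c1 via the helper function
        select col_description('t1'::regclass, 1);  -- 1 = attnum of c1

        -- or directly from the catalogs
        select d.description
        from pg_description d
        join pg_attribute a on a.attrelid = d.objoid
                           and a.attnum   = d.objsubid
        where d.objoid = 't1'::regclass
          and a.attname = 'c1';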

    Read the article

  • Using HttpClient with SSL and certificates

    - by ChrisCM
    While I've been familiar with HTTPS and the concept of SSL, I have recently begun some development and found I am a little confused. The requirement was that I write a small Java application that runs on a machine attached to a scanner. When a document is scanned, this is picked up and the file (usually PDF) is sent over the internet to our application server, which then processes it. I've written the application using the Apache Commons libraries and HttpClient.

    The second requirement was to connect over SSL, requiring a certificate. Following guidance on the HttpClient page, I am using AuthSSLProtocolSocketFactory from the contributions section. The constructor can take a keystore, keystore password, truststore and truststore password. As an initial test our DBA enabled SSL on one of our development webservers and provided me with a .p12 file, which, when imported into IE, allows me to connect successfully.

    I am a bit confused between keystores and truststores and what steps I need to take using keytool. I tried importing the p12 into a keystore file but get the error:

        keytool error: java.lang.Exception: Input not an X.509 certificate

    I followed a suggestion of importing the p12 into Internet Explorer and exporting it as a .cer, which I can then successfully import into a keystore. When I supply this as the keystore argument of the AuthSSLProtocolSocketFactory I get a meaningless error, but if I try it as the truststore it seems to be read fine, but ultimately I get:

        Caused by: javax.net.ssl.SSLHandshakeException: Received fatal alert: bad_certificate

    I am unsure whether I have missed some steps, I am misunderstanding SSL and mutual authentication altogether, or this is a misconfiguration on the server side. Can anyone provide suggestions or point me towards resources that might help me figure this out, please?
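
    One likely culprit (a guess based on the keytool error, not an answer from the thread): keytool -import expects a bare X.509 certificate, while a .p12 file is a whole PKCS#12 keystore containing the private key. Exporting a .cer from IE drops the private key, so mutual authentication then fails with bad_certificate. On Java 6 and later the p12 can be converted, or used directly as the keystore:

        # convert the PKCS#12 bundle, private key included, into a JKS keystore
        keytool -importkeystore -srckeystore client.p12 -srcstoretype PKCS12 \
                -destkeystore client.jks -deststoretype JKS

        # or skip conversion: a .p12 can itself serve as the keystore
        # if the keystore type is set to PKCS12 when it is loaded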

    Read the article

  • How to make NAnt send an email using a real account

    - by Turro
    First of all, I have already seen this post: nant mail issues, but the only answer is not satisfactory (i.e. it doesn't work for me).

    I am using NAnt to get the latest version of the source, upgrade the version of the libraries and application, build the application, build the setups... all the usual things, I bet. I would like NAnt to send an email to some people confirming the conclusion of the build process. I've already checked the official (pretty ugly, IMHO) documentation for the task, but the example, once copied and customized, doesn't work. These are the NAnt target and task I'm using:

        <target name="sendMail">
            <mail from="[email protected]"
                  tolist="[email protected];[email protected]"
                  subject="Subject of email"
                  mailhost="smtp.gmail.com"
                  message="Your new release is ready!">
            </mail>
        </target>

    The error message I get is:

        530 5.7.0 Must issue a STARTTLS command first.

    It looks like the task was designed for use with a provider that doesn't need authentication; but what can I do if I must use an external SMTP server which requires authentication (telling my boss I need an in-house SMTP server is not an option)? Can anybody help/teach me? Thanks in advance...
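
    If the <mail> task in the NAnt version at hand has no TLS/authentication attributes, one workaround (a sketch, untested against the poster's setup; credentials are placeholders) is a <script> task that drives System.Net.Mail directly, which does support STARTTLS on Gmail's port 587:

        <target name="sendMail">
            <script language="C#">
                <references>
                    <include name="System.dll" />
                </references>
                <code><![CDATA[
                    public static void ScriptMain(Project project) {
                        var client = new System.Net.Mail.SmtpClient("smtp.gmail.com", 587);
                        client.EnableSsl = true; // upgrades the session via STARTTLS
                        client.Credentials = new System.Net.NetworkCredential(
                            "[email protected]", "password");
                        // recipients are comma-separated in this overload
                        client.Send("[email protected]",
                                    "[email protected],[email protected]",
                                    "Subject of email",
                                    "Your new release is ready!");
                    }
                ]]></code>
            </script>
        </target>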

    Read the article

  • jQuery does not return a JSON object to Firebug when JSON is generated by PHP

    - by Keyslinger
    The contents of test.json are:

        {"foo": "The quick brown fox jumps over the lazy dog.","bar": "ABCDEFG","baz": [52, 97]}

    When I use the following jQuery.ajax() call to process the static JSON inside test.json:

        $.ajax({
            url: 'test.json',
            dataType: 'json',
            data: '',
            success: function(data) {
                $('.result').html('<p>' + data.foo + '</p>' + '<p>' + data.baz[1] + '</p>');
            }
        });

    I get a JSON object that I can browse in Firebug. However, when using the same ajax call with the URL pointing instead to this PHP script:

        <?php
        $arrCoords = array('foo'=>'The quick brown fox jumps over the lazy dog.','bar'=>'ABCDEFG','baz'=>array(52,97));
        echo json_encode($arrCoords);
        ?>

    which prints this identical JSON object:

        {"foo":"The quick brown fox jumps over the lazy dog.","bar":"ABCDEFG","baz":[52,97]}

    I get the proper output in the browser, but Firebug reveals only HTML. There is no JSON tab present when I expand the GET request in Firebug.
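
    The usual explanation (a guess, though a well-known one): Firebug decides how to present the response from its Content-Type header, and PHP sends text/html by default, while the static .json file is served with a JSON-ish type by the web server. Setting the header before echoing should restore the JSON tab:

        <?php
        header('Content-Type: application/json');
        $arrCoords = array('foo'=>'The quick brown fox jumps over the lazy dog.','bar'=>'ABCDEFG','baz'=>array(52,97));
        echo json_encode($arrCoords);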

    Read the article

  • How should bug tracking and help tickets integrate?

    - by Max Schmeling
    I have a little experience with bug tracking systems such as FogBugz, where help tickets are (or can be) bugs, and I have some experience using a bug tracking system internally, completely separate from a help center system.

    My question is: in a company with an existing (home-grown) help center system where replacing it is not an option, how should a bug tracking system (probably Mantis) be integrated into the process?

    Right now help tickets get put in for issues, questions, etc. and get assigned to the appropriate person (PC tech, help desk staff, or, if it's an application issue they can't solve in the help desk, a developer). A user can put a request for small modifications or fixes to an application in a help ticket; the developer it gets assigned to will make the change at some point, apply their time to that ticket, and then close the ticket when the change goes to production.

    We don't currently have a bug tracking system, so I'm looking into the best way to integrate one. Should we just take the help tickets, put them into the bug tracking system if they are bugs (or issues or feature requests), and then close the ticket if it's not an emergency fix? We probably don't want to expose the bug tracking system to anyone else, as they wouldn't know what to put in the help center system and what to put in the bug tracker... right?

    Any thoughts? Suggestions? Tips? Advice? To-dos? Not-to-dos? etc...

    Read the article

  • Why does my Delphi program's memory continue to grow?

    - by lkessler
    I am using Delphi 2009, which has the FastMM4 memory manager built in. My program reads in and processes a large dataset. All memory is freed correctly whenever I clear the dataset or exit the program. It has no memory leaks at all.

    Using the CurrentMemoryUsage routine given in spenwarr's answer to http://stackoverflow.com/questions/437683/how-to-get-the-memory-used-by-a-delphi-program, I have displayed the memory used by FastMM4 during processing. What seems to be happening is that memory use grows after every process-and-release cycle, e.g.:

        1,456 KB used after starting my program with no dataset.
        218,455 KB used after loading a large dataset.
        71,994 KB after clearing the dataset completely. (If I exit at this point, or at any point in this example, no memory leaks are reported.)
        271,905 KB used after loading the same dataset again.
        125,443 KB after clearing the dataset completely.
        325,519 KB used after loading the same dataset again.
        179,059 KB after clearing the dataset completely.
        378,752 KB used after loading the same dataset again.

    It seems that my program's memory use grows by about 53,400 KB with each load/clear cycle. Task Manager confirms that this is actually happening.

    I have heard that FastMM4 does not always release all of the program's memory back to the operating system when objects are freed, so that it can keep some memory around for when it needs more. But this continual growth bothers me. Since no memory leaks are reported, I can't identify a problem. Does anyone know why this is happening, whether it is bad, and whether there is anything I can or should do about it?
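
    For readers who don't follow the link, the measurement routine referenced above is roughly the following (reconstructed from memory of that answer; it sums FastMM4's block statistics via GetMemoryManagerState, which Delphi 2009 exposes from the System unit):

        function CurrentMemoryUsage: Cardinal;
        var
          st: TMemoryManagerState;
          i: Integer;
        begin
          GetMemoryManagerState(st);
          // medium and large blocks are tracked directly...
          Result := st.TotalAllocatedMediumBlockSize + st.TotalAllocatedLargeBlockSize;
          // ...small blocks are tracked per size class
          for i := Low(st.SmallBlockTypeStates) to High(st.SmallBlockTypeStates) do
            with st.SmallBlockTypeStates[i] do
              Inc(Result, UseableBlockSize * AllocatedBlockCount);
        end;

    Note that this reports what the application has allocated, while Task Manager reports what the process has reserved from the OS; the two can legitimately diverge.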

    Read the article

  • Can't get MySQL source query to work using Python mysqldb module

    - by Chris
    I have the following lines of code:

        sql = "source C:\\My Dropbox\\workspace\\projects\\hosted_inv\\create_site_db.sql"
        cursor.execute(sql)

    When I execute my program, I get the following error:

        Error 1064: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'source C:\My Dropbox\workspace\projects\hosted_inv\create_site_db.sql' at line 1

    Now I can copy and paste the following into mysql as a query:

        source C:\\My Dropbox\\workspace\\projects\\hosted_inv\\create_site_db.sql

    and it works perfectly. When I check the query log for the query executed by my script, it shows that my query was the following:

        source C:\\My Dropbox\\workspace\\projects\\hosted_inv\\create_site_db.sql

    However, when I manually paste it in and execute it, the entire create_site_db.sql gets expanded in the query log and it shows all the SQL queries in that file.

    Am I missing something here about how mysqldb does queries? Am I running into a limitation? My goal is to run a SQL script to create the schema structure, but I don't want to have to call mysql in a shell process to source the SQL file. Any thoughts? Thanks!
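
    The underlying distinction (my explanation, not from the thread): source is a command of the mysql command-line client, not a SQL statement, so the server rejects it when MySQLdb sends it as a query. A rough workaround, assuming the script contains plain ;-terminated statements and no client-only commands like DELIMITER:

        def run_sql_file(cursor, path):
            """Naively split a dump into statements and run them one by one."""
            with open(path) as f:
                script = f.read()
            for statement in script.split(';'):
                if statement.strip():
                    cursor.execute(statement)

        run_sql_file(cursor, 'C:\\My Dropbox\\workspace\\projects\\hosted_inv\\create_site_db.sql')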

    Read the article

  • Media Kind in iTunes COM for Windows SDK

    - by Joel Verhagen
    I recently found out about the awesomeness of the iTunes COM for Windows SDK. I am using Python with win32com to talk to my iTunes library. Needless to say, my head is in the process of exploding. This API rocks. I have one issue, though: how do I access the Media Kind attribute of a track? I looked through the help file provided in the SDK and saw no sign of it.

    If you go into iTunes, you can modify a track's media kind. This way, if you have an audiobook that is showing up in your music library, you can set the Media Kind to Audiobook and it will appear in the Books section in iTunes. Pretty nifty. The reason I ask is that I have a whole crapload of audiobooks showing up in my LibraryPlaylist. Here is my code thus far:

        import win32com.client

        iTunes = win32com.client.gencache.EnsureDispatch('iTunes.Application')
        track = win32com.client.CastTo(iTunes.LibraryPlaylist.Tracks.Item(1), 'IITFileOrCDTrack')

        print track.Artist, '-', track.Name
        print
        print 'Is this track an audiobook?'
        print 'How the hell should I know?'

    Thanks in advance.

    Read the article

  • python-social-auth AuthCanceled exception

    - by vero4ka
    I'm using python-social-auth in my Django application for authentication via Facebook. But when a user tries to log in, is redirected to the Facebook app page, and clicks the "Cancel" button, the following exception appears:

        ERROR 2014-01-03 15:32:15,308 base :: Internal Server Error: /complete/facebook/
        Traceback (most recent call last):
          File "/home/vera/virtualenv/myapp/local/lib/python2.7/site-packages/django/core/handlers/base.py", line 114, in get_response
            response = wrapped_callback(request, *callback_args, **callback_kwargs)
          File "/home/vera/virtualenv/myapp/local/lib/python2.7/site-packages/django/views/decorators/csrf.py", line 57, in wrapped_view
            return view_func(*args, **kwargs)
          File "/home/vera/virtualenv/myapp/local/lib/python2.7/site-packages/social/apps/django_app/utils.py", line 45, in wrapper
            return func(request, backend, *args, **kwargs)
          File "/home/vera/virtualenv/myapp/local/lib/python2.7/site-packages/social/apps/django_app/views.py", line 21, in complete
            redirect_name=REDIRECT_FIELD_NAME, *args, **kwargs)
          File "/home/vera/virtualenv/myapp/local/lib/python2.7/site-packages/social/actions.py", line 54, in do_complete
            *args, **kwargs)
          File "/home/vera/virtualenv/myapp/local/lib/python2.7/site-packages/social/strategies/base.py", line 62, in complete
            return self.backend.auth_complete(*args, **kwargs)
          File "/home/vera/virtualenv/myapp/local/lib/python2.7/site-packages/social/backends/facebook.py", line 63, in auth_complete
            self.process_error(self.data)
          File "/home/vera/virtualenv/myapp/local/lib/python2.7/site-packages/social/backends/facebook.py", line 56, in process_error
            super(FacebookOAuth2, self).process_error(data)
          File "/home/vera/virtualenv/myapp/local/lib/python2.7/site-packages/social/backends/oauth.py", line 312, in process_error
            raise AuthCanceled(self, data.get('error_description', ''))
        AuthCanceled: Authentication process canceled

    Is there any way to catch it in Django?
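
    python-social-auth ships a middleware intended for exactly this case. A minimal setup (a sketch based on its SocialAuthExceptionMiddleware; the redirect target is an assumption for this app):

        # myapp/middleware.py
        from django.shortcuts import redirect
        from social.apps.django_app.middleware import SocialAuthExceptionMiddleware
        from social.exceptions import AuthCanceled

        class HandleAuthCanceledMiddleware(SocialAuthExceptionMiddleware):
            def process_exception(self, request, exception):
                if isinstance(exception, AuthCanceled):
                    # the user pressed "Cancel" on Facebook - send them back to login
                    return redirect('login')
                return super(HandleAuthCanceledMiddleware, self).process_exception(
                    request, exception)

    The class then needs to be listed in MIDDLEWARE_CLASSES in settings.py.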

    Read the article

  • Paypal sandbox a/c in Dotnet - IPN response Invalid

    - by Sam
    Hi, I am integrating PayPal into my site. I use a sandbox account: one buyer a/c and one seller a/c. I downloaded the code below from the PayPal site:

        string strSandbox = "https://www.sandbox.paypal.com/cgi-bin/webscr";
        HttpWebRequest req = (HttpWebRequest)WebRequest.Create(strSandbox);

        // Set values for the request back
        req.Method = "POST";
        req.ContentType = "application/x-www-form-urlencoded";
        byte[] param = Request.BinaryRead(HttpContext.Current.Request.ContentLength);
        string strRequest = Encoding.ASCII.GetString(param);
        strRequest += "&cmd=_notify-validate";
        req.ContentLength = strRequest.Length;

        // for proxy
        // WebProxy proxy = new WebProxy(new Uri("http://url:port#"));
        // req.Proxy = proxy;

        // Send the request to PayPal and get the response
        StreamWriter streamOut = new StreamWriter(req.GetRequestStream(), System.Text.Encoding.ASCII);
        streamOut.Write(strRequest);
        streamOut.Close();
        StreamReader streamIn = new StreamReader(req.GetResponse().GetResponseStream());
        string strResponse = streamIn.ReadToEnd();
        streamIn.Close();

        if (strResponse == "VERIFIED")
        {
            // check the payment_status is Completed
            // check that txn_id has not been previously processed
            // check that receiver_email is your Primary PayPal email
            // check that payment_amount/payment_currency are correct
            // process payment
        }
        else if (strResponse == "INVALID")
        {
            // log for manual investigation
        }
        else
        {
            // log response/ipn data for manual investigation
        }

    When I add this snippet to the Page_Load event of the success page, I get the IPN response INVALID, even though the amount was paid successfully. Any help? The PayPal docs are not clear. Thanks in advance.
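
    One common cause worth checking (my guess, not an answer from the thread): the code above is an IPN listener, but it is wired into the success page's Page_Load. PayPal posts IPN messages in the background to the notify_url, while the browser redirect to the return page carries different (PDT-style) variables; echoing those back to _notify-validate yields INVALID. The usual arrangement is a separate listener page referenced from the payment form (URLs below are placeholders):

        <!-- the IPN post goes to notify_url; the buyer's browser goes to return -->
        <input type="hidden" name="notify_url" value="https://example.com/ipn-listener.aspx" />
        <input type="hidden" name="return" value="https://example.com/success.aspx" />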

    Read the article

  • Nested URLs and Rewrite rules in Apache2

    - by Radha Krishna. S.
    Hi, I need some help with rewrite rules and nested URLs. I am using TikiWiki for my website and am in the process of setting up SE-friendly URLs for my projects. Specifically, I have the following rewrite rule for www.example.com/Projects, pointing to a page that lists all the projects hosted on example:

        RewriteRule ^Projects$ articles?type=Project [L]

    This works fine. Now, I would like www.example.com/Projects/Project1 to point to a specific project. I have this rewrite rule:

        RewriteRule ^(Projects/Project1)$ tiki-read_article.php?articleId=6

    This works, but only partially: the content is rendered, but the theme (images, CSS, etc.) goes for a toss, so the page comes out as bare text. I understand that this happens because the relative paths in the theme/CSS/images all resolve against Projects as the base folder instead of the root of the website. I don't want to touch the CMS portion, i.e. change the theme/CSS/image paths in the files, mostly for reasons of upgradability.

    Can someone help me understand and write a rule so that the above nested URL works? Regards, Radha
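
    One approach that keeps the CMS untouched (a sketch; the asset directory names below are assumptions, TikiWiki's actual folders may differ): add a rule that catches theme assets requested relative to /Projects/ and rewrites them back to the site root, ahead of the article rules:

        # map /Projects/<asset-path> back to the real /<asset-path>
        RewriteRule ^Projects/(styles|img|css|lib)/(.*)$ /$1/$2 [L]

        # then the article rules
        RewriteRule ^Projects$ articles?type=Project [L]
        RewriteRule ^Projects/Project1$ tiki-read_article.php?articleId=6 [L]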

    Read the article

  • Writing to a log4net FileAppender with multiple threads performance problems

    - by Wayne
    TickZoom is a very high performance app which uses its own parallelization library and multiple OS threads for smooth utilization of multi-core computers. The app hits a bottleneck where users need to write information to a log4net FileAppender from separate OS threads.

    The FileAppender uses the MinimalLock feature so that each thread can lock the file, write to it, and then release it for the next thread to write. If MinimalLock is disabled, log4net reports errors about the file already being locked by another process (thread).

    A better way for log4net to do this would be to have a single thread that takes care of writing to the FileAppender, with the other threads simply adding their messages to a queue. That way, MinimalLock could be disabled to greatly improve logging performance. Additionally, the application does a lot of CPU-intensive work, so moving the file writes to a separate thread would also improve performance by never making the CPU wait on I/O.

    So the question is: does log4net already offer this feature? If so, how do you enable threaded writing to a file? Is there another, more advanced appender, perhaps? If not, then since log4net is already wrapped in the platform, it would be possible to implement a separate thread and queue for this purpose in the TickZoom code. Sincerely, Wayne
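
    To my knowledge, log4net of that era does not ship an asynchronous file appender, but the queue-plus-writer-thread design described above hangs naturally off its ForwardingAppender. A minimal sketch (not production code: unbounded queue, no flush on shutdown):

        using System.Collections.Generic;
        using System.Threading;
        using log4net.Appender;
        using log4net.Core;

        public class AsyncForwardingAppender : ForwardingAppender
        {
            private readonly Queue<LoggingEvent> queue = new Queue<LoggingEvent>();

            public AsyncForwardingAppender()
            {
                var worker = new Thread(WriteLoop) { IsBackground = true };
                worker.Start();
            }

            protected override void Append(LoggingEvent loggingEvent)
            {
                loggingEvent.Fix = FixFlags.All; // capture thread-bound data now
                lock (queue)
                {
                    queue.Enqueue(loggingEvent);
                    Monitor.Pulse(queue);
                }
            }

            private void WriteLoop()
            {
                while (true)
                {
                    LoggingEvent ev;
                    lock (queue)
                    {
                        while (queue.Count == 0) Monitor.Wait(queue);
                        ev = queue.Dequeue();
                    }
                    base.Append(ev); // forward to the attached FileAppender
                }
            }
        }

    With all writes funneled through one thread, the FileAppender no longer needs MinimalLock.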

    Read the article

  • Can't write to physical drive in Win 7?

    - by matt
    I wrote a disk utility that allows you to erase whole physical drives. It uses the Windows file API, calling:

        destFile = CreateFile("\\.\PhysicalDrive1", GENERIC_WRITE,
                              FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                              OPEN_EXISTING, createflags, NULL);

    and then just calling WriteFile, making sure writes are in multiples of sectors, i.e. 512 bytes. This worked fine in the past on XP, and even on the Win7 RC; all you have to do is make sure you are running it as an administrator. But now that I have retail Win7 Professional, it doesn't work anymore! The drives still open fine for writing, but calling WriteFile on the successfully opened drive now fails!

    Does anyone know why this might be? Could it have something to do with opening it with the share flags? This is what I have always done before, and it worked. Could it be that something is now sharing the drive and blocking the writes? Is there some way to properly "unmount" a drive, or at least the partitions on it, so that I would have exclusive access?

    Some other tools that used to work don't anymore either, but some do, like the WD Diagnostics' erase functionality. And after it has erased the drive, my tool works on it too! This leads me to believe there is some "unmount" step I need to perform on the drive first, to gain permission to write to it. Any ideas?
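
    For what it's worth (a sketch of the usual suspect, not a confirmed diagnosis): starting with Vista/Win7, raw writes to disk regions covered by a mounted volume are blocked unless the volume is locked or dismounted, so erase tools typically lock each volume on the target disk first. Error handling is omitted and the volume letter is assumed known:

        #include <windows.h>
        #include <winioctl.h>

        // Lock and dismount one volume (e.g. "\\\\.\\E:") before raw-writing its disk.
        BOOL PrepareVolume(LPCWSTR volumePath)
        {
            HANDLE hVol = CreateFileW(volumePath, GENERIC_READ | GENERIC_WRITE,
                                      FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                                      OPEN_EXISTING, 0, NULL);
            if (hVol == INVALID_HANDLE_VALUE) return FALSE;

            DWORD bytes = 0;
            BOOL ok = DeviceIoControl(hVol, FSCTL_LOCK_VOLUME,
                                      NULL, 0, NULL, 0, &bytes, NULL)
                   && DeviceIoControl(hVol, FSCTL_DISMOUNT_VOLUME,
                                      NULL, 0, NULL, 0, &bytes, NULL);
            // keep hVol open while erasing the physical drive; close it afterwards
            return ok;
        }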

    Read the article

  • How to continuously scroll content within a DIV using jQuery?

    - by Camsoft
    Aim: to have a container DIV with a fixed height and width, and have the HTML content within that DIV automatically scroll vertically, continuously.

    Question: Basically, I've created the code below using jQuery to scroll (move) the child DIV vertically upwards until it is outside the bounding box, at which point the animation completes and triggers an event handler that resets the position of the child DIV and starts the process again. This works fine: the content scrolls up, leaving a blank space, then starts from the bottom again and scrolls up.

    The problem is that the requirement is for the content to appear as if it were continuously repeating, with no blank gap between the end and the start of the content (the original post included a diagram illustrating this). Is there a way to do that? (I don't want to use third-party plugins or libraries other than jQuery.)

    What I have so far. The HTML:

        <div id="scrollingContainer">
            <div class="scroller">
                <h1>This is a title</h1>
                <p>Lorem ipsum dolor sit amet, consectetur adipiscing elit. Suspendisse at orci mi, id gravida tellus. Integer malesuada ante sit amet enim pulvinar congue. Donec pulvinar dolor et arcu posuere feugiat id et felis.</p>
                <p>More content....</p>
            </div>
        </div>

    The CSS:

        #scrollingContainer {
            height: 300px;
            width: 300px;
            overflow: hidden;
        }
        #scrollingContainer DIV.scroller {
            position: relative;
        }

    The JavaScript:

        /**
         * Scrolls the content DIV
         */
        function scroll() {
            if ($('DIV.scroller').height() > $('#scrollingContainer').height()) {
                var t = $('DIV.scroller').position().top + $('DIV.scroller').height();

                /* Animate */
                $('DIV.scroller').animate({ top: '-=' + t + 'px' }, 4000, 'linear', animationComplete);
            }
        }

        function animationComplete() {
            $(this).css('top', $('#scrollingContainer').height());
            scroll();
        }
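
    The usual trick for a seamless loop (a sketch, not from the thread): append a copy of the content so the tail is always followed by the head, then reset to top: 0 the moment exactly one copy has scrolled past. The reset is invisible because the clone occupies the same pixels the original did at the start:

        $(function() {
            var $scroller = $('#scrollingContainer DIV.scroller');
            var contentHeight = $scroller.height(); // one copy, measured before cloning

            // duplicate the content so the loop point is covered
            $scroller.append($scroller.children().clone());

            function loop() {
                $scroller.css('top', 0).animate(
                    { top: -contentHeight + 'px' },
                    4000, 'linear',
                    loop // snap back to 0 and go again; the clone hides the jump
                );
            }
            loop();
        });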

    Read the article

  • Eclipse PDE - Plug-in, Feature, and Product Versioning

    - by Michael
    I am having much confusion over the process of upgrading version numbers in dependent plug-ins, features, and products in a fairly large Eclipse workspace. I have made API changes to Java code residing in an existing plug-in, which thus requires an increase of the major part of the version identifier. This plug-in serves as a dependency of a given feature, and the feature is in turn included in a product.

    From the documentation at http://wiki.eclipse.org/Version_Numbering, I understand (for the most part) when the proper number should be increased on the containing plug-in itself. However, how does the major version number change on the plug-in affect dependent, "down-the-line" items (e.g., features, products)?

    For example, assume we have the typical "Hello World" setup as follows:

        Plug-in: com.example.helloworld, version 1.0.0
        Feature: com.example.helloworld.feature, version 1.0.0
        Product: com.example.helloworld.product, version 1.0.0

    If I were to make an API change in the plug-in, this would require a version update to 2.0.0. What would then be the version of the feature: 1.1.0? The same question applies at the product level (e.g., if the feature is 1.1.0 or 2.0.0, what is the product version number)?

    I'm sure this is quite the newbie question, so I apologize for wasting anyone's time and effort. I have searched for this type of content, but all I am finding are examples showing how to develop a plug-in, feature, product, and update site for the first time. The only other content related to my search has been about developing feature patches, and it has not touched on the versioning aspect as much as I would prefer. Coming into an Eclipse RCP/PDE environment for the first time, I am having difficulty learning the proper way and/or best practices for making such versioning updates and how best to reflect them throughout other dependent projects in the workspace.

    Read the article

  • Is SQLDataReader slower than using the command line utility sqlcmd?

    - by Andrew
    I was recently advocating to a colleague that we replace some C# code that uses the sqlcmd command-line utility with a SqlDataReader. The old code uses:

        System.Diagnostics.ProcessStartInfo procStartInfo =
            new System.Diagnostics.ProcessStartInfo("cmd", "/c " + sqlCmd);

    where sqlCmd is something like:

        "sqlcmd -S " + serverName + " -y 0 -h-1 -Q " + "\"" + "USE [" + database + "]" + ";" + txtQuery.Text + "\"";

    The results are then parsed using regular expressions. I argued that using a SqlDataReader would be more in line with industry practices, easier to debug and maintain, and probably faster. However, the SqlDataReader approach is at best the same speed and quite possibly slower. I believe I'm doing everything correctly with the SqlDataReader. The code is:

        using (SqlConnection connection = new SqlConnection())
        {
            try
            {
                SqlConnectionStringBuilder builder = new SqlConnectionStringBuilder(connectionString);
                connection.ConnectionString = builder.ToString();
                SqlCommand command = new SqlCommand(queryString, connection);
                connection.Open();
                SqlDataReader reader = command.ExecuteReader();
                // do stuff w/ reader
                reader.Close();
            }
            catch (Exception ex)
            {
                outputMessage += (ex.Message);
            }
        }

    I've used System.Diagnostics.Stopwatch to time both approaches, and the command-line utility (called from C# code) does seem faster (20-40%?). The SqlDataReader has the neat feature that when the same code is called again it's lightning fast, but for this application we don't anticipate that.

    I have already done some research on this problem. I note that the sqlcmd utility uses OLE DB technology to hit the database. Is that faster than ADO.NET? I'm really surprised, especially since the command-line approach involves starting up a process. I really thought it would be slower. Any thoughts? Thanks, Dave
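
    One variable worth isolating (my suggestion, not from the thread): ADO.NET connection pooling. The first SqlConnection.Open() pays the full connection setup cost, which is also why repeat calls look "lightning fast" once a pooled connection exists. Timing the open separately from the query makes the comparison with sqlcmd fairer:

        var sw = new System.Diagnostics.Stopwatch();
        sw.Start();
        connection.Open();
        Console.WriteLine("open: {0} ms", sw.ElapsedMilliseconds);

        sw.Reset();
        sw.Start();
        using (SqlDataReader reader = command.ExecuteReader())
        {
            while (reader.Read()) { /* consume rows */ }
        }
        Console.WriteLine("query+read: {0} ms", sw.ElapsedMilliseconds);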

    Read the article

  • Make a local-only daemon listen on a different interface (using iptables port forwarding)?

    - by UniIsland
    I have a daemon program which listens on 127.0.0.1:8000. I need to access it when I connect to my box over VPN, so I want it to listen on the ppp0 interface too.

    I've tried the "ssh -L" method. It works, but I don't think it's the right way to do it, having an extra ssh process running in the background. I tried the "netcat" method, but it exits when the connection is closed, so it's not a valid way of listening. I also tried several iptables rules; none of them worked. I'm not listing all the rules I've used, but for example:

        iptables -A FORWARD -j ACCEPT
        iptables -t nat -A PREROUTING -i ppp+ -p tcp --dport 8000 -j DNAT --to-destination 127.0.0.1:8000

    The above ruleset doesn't work. I have net.ipv4.ip_forward set to 1.

    Does anyone know how to redirect traffic from a ppp interface to lo? That is, listen on 192.168.45.1:8000 (ppp0) as well as 127.0.0.1:8000 (lo). There's no need to alter the port. Thanx
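
    For what it's worth (a suggestion, not from the thread): the DNAT rule fails because the kernel refuses to forward packets arriving on an external interface to a loopback destination; 127.0.0.0/8 is treated as martian for routed traffic. A small userspace relay sidesteps this entirely, e.g. with socat:

        # accept connections on the VPN address and relay them to the local daemon
        socat TCP-LISTEN:8000,bind=192.168.45.1,fork,reuseaddr TCP:127.0.0.1:8000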

    Read the article

  • Keys and terminology

    - by nabbed
    I have a predicament related to terminology. Our system processes events. Events are dispatched to a node based on the value of some field (or set of fields). We call this set of fields the key. We call the value of that set of fields the key value.

    What adds confusion is that each event is essentially a bag of key-value pairs (i.e., a hash map). So the word key is used for two different purposes: 1) to describe the set of fields on which the event is dispatched, and 2) as a field name.

    So if you had a collection of key-value pairs, and a set of those key-value pairs made up a database-style key, what terminology would you use to distinguish those two?

    (One further complication is that the key on which the event is dispatched is not always unique. For instance, if we dispatch on userid, and that user performs multiple actions, we will process multiple events with the same userid value. So maybe key is the wrong word to describe the set of fields on which we dispatch an event.)

    Read the article

  • Sync Vs. Async Sockets Performance in .NET

    - by Michael Covelli
    Everything that I read about sockets in .NET says that the asynchronous pattern gives better performance (especially with the new SocketAsyncEventArgs, which saves on allocation). I think this makes sense if we're talking about a server with many client connections, where it's not possible to allocate one thread per connection. Then I can see the advantage of using the ThreadPool threads and getting async callbacks on them.

    But in my app, I'm the client, and I just need to listen to one server sending market tick data over one TCP connection. Right now, I create a single thread, set the priority to Highest, and call Socket.Receive() on it. My thread blocks on this call and wakes up once new data arrives. If I were to switch to an async pattern so that I get a callback when there's new data, I see two issues:

    1. The ThreadPool threads will have default priority, so it seems they will be strictly worse than my own thread, which has Highest priority.
    2. I'll still have to send everything through a single thread at some point. Say that I get N callbacks at almost the same time on N different ThreadPool threads, notifying me that there's new data. The N byte arrays that they deliver can't be processed on the ThreadPool threads, because there's no guarantee that they represent N unique market data messages, since TCP is stream-based. I'll have to lock, put the bytes into an array anyway, and signal some other thread that can process what's in the array. So I'm not sure what having N ThreadPool threads is buying me.

    Am I thinking about this wrong? Is there a reason to use the async pattern in my specific case of one client connected to one server?
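
    For concreteness, here is a sketch of the SocketAsyncEventArgs alternative being weighed (my illustration, not a recommendation). One detail it highlights: on a single connection only one receive is ever outstanding, so the "N simultaneous callbacks" scenario from point 2 does not actually arise:

        using System.Net.Sockets;

        class TickReceiver
        {
            private readonly Socket socket;
            private readonly SocketAsyncEventArgs args = new SocketAsyncEventArgs();

            public TickReceiver(Socket connectedSocket)
            {
                socket = connectedSocket;
                args.SetBuffer(new byte[8192], 0, 8192);
                args.Completed += (s, e) => OnReceive(e);
                PostReceive();
            }

            private void PostReceive()
            {
                // ReceiveAsync returns false when it completed synchronously;
                // in that case the Completed event is NOT raised, so call it here
                if (!socket.ReceiveAsync(args))
                    OnReceive(args);
            }

            private void OnReceive(SocketAsyncEventArgs e)
            {
                if (e.BytesTransferred > 0 && e.SocketError == SocketError.Success)
                {
                    // append e.Buffer[0 .. e.BytesTransferred) to the message assembler
                    PostReceive(); // repost: one outstanding receive at a time
                }
            }
        }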

    Read the article

  • Publishing to live website

    - by Alienfluid
    Hey there, my friend and I are collaborating on an ASP.NET-powered website. To develop it locally, we use Visual Web Developer Express (good enough for our needs). Subversion (using TortoiseSVN) is our source control of choice, with the repository residing on Unfuddle.com.

    We run into problems when we need to update the live site, since there's no version control on it. Currently we use the "Copy Web Site" feature in VWD, which copies the files using FTP. Here are some problems:

    - VWD only keeps track of files uploaded by one user, so if the other user uploads a newer version of a file to the live site, VWD on my side cannot tell whether the live version of the file is newer or mine is.
    - There's no way to tell whether all the latest changes are available on the live site.
    - We have to be careful not to party all over the shared web.config file, since the other user's local DB settings are different from mine, and of course the live DB settings are a whole other story!

    What do you guys use to publish to a live site? Does anything out there tie into Subversion so that we can automate the process and always guarantee that the live site is synced to a change list number? Also, how do you manage the different web.config settings? Thanks!
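
    On the web.config question specifically: .NET 2.0+ lets a configuration section be pulled from a separate file via the configSource attribute, so the machine-specific bits can be kept out of version control (the file name below is my choice, not a convention):

        <!-- web.config, committed to Subversion -->
        <configuration>
          <!-- each machine keeps its own connections.config, ignored by svn -->
          <connectionStrings configSource="connections.config" />
        </configuration>

        <!-- connections.config, one per machine / per environment -->
        <connectionStrings>
          <add name="Main" connectionString="Server=.;Database=SiteDb;Integrated Security=true" />
        </connectionStrings>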

    Read the article
