Search Results

Search found 10384 results on 416 pages for 'plan cache'.


  • Browser freeze while ajax call in action

    - by kaivalya
    I have an ASP.NET web app. I notice that while a simple ajax call (see below) is in progress, the web application does not respond to any action I try from a different browser:

        $.ajax({
            type: "GET",
            async: true,
            url: "someurl",
            dataType: "text",
            cache: false,
            success: function (msg) {
                CheckResponse(msg);
            }
        });

    This happens when I open two Firefox windows or two IE windows. I run the function that makes the ajax call in the first browser, and until the response comes back I cannot do anything on the same site in the second browser. No breakpoints are hit on the server from the second browser until the initial ajax call completes; it hangs on any click. The hang in the second browser ends immediately after the ajax call completes in the first one. This behavior is not observed with IE and Firefox side by side; it only happens with IE & IE or FF & FF. I'd appreciate any help spotting what I am missing here.

    Read the article

  • How to configure SQLite to run with NHibernate where assembly resolves System.Data.SQLite?

    - by Michael Hedgpeth
    I am using the latest NHibernate 2.1.0Beta2. I'm trying to unit test with SQLite and have the configuration set up as:

        Dictionary<string, string> properties = new Dictionary<string, string>();
        properties.Add("connection.driver_class", "NHibernate.Driver.SQLite20Driver");
        properties.Add("dialect", "NHibernate.Dialect.SQLiteDialect");
        properties.Add("connection.provider", "NHibernate.Connection.DriverConnectionProvider");
        properties.Add("query.substitutions", "true=1;false=0");
        properties.Add("connection.connection_string", "Data Source=test.db;Version=3;New=True;");
        properties.Add("proxyfactory.factory_class", "NHibernate.ByteCode.LinFu.ProxyFactoryFactory, NHibernate.ByteCode.LinFu");

        configuration = new Configuration();
        configuration.SetProperties(properties);

    When I try to run it, I get the following error:

        NHibernate.HibernateException: The IDbCommand and IDbConnection implementation in the
        assembly System.Data.SQLite could not be found. Ensure that the assembly
        System.Data.SQLite is located in the application directory or in the Global Assembly
        Cache. If the assembly is in the GAC, use <qualifyAssembly/> element in the
        application configuration file to specify the full name of the assembly.
           at NHibernate.Driver.ReflectionBasedDriver..ctor(String driverAssemblyName, String connectionTypeName, String commandTypeName) in c:\CSharp\NH\nhibernate\src\NHibernate\Driver\ReflectionBasedDriver.cs: line 26
           at NHibernate.Driver.SQLite20Driver..ctor() in c:\CSharp\NH\nhibernate\src\NHibernate\Driver\SQLite20Driver.cs: line 28

    So it looks like I need to reference the assembly directly. How would I do this so I don't get this error anymore? I downloaded the latest assembly from here: http://sourceforge.net/projects/sqlite-dotnet2.

    Read the article

  • Building Reducisaurus URLs

    - by Alix Axel
    I'm trying to use the Reducisaurus web service to minify CSS and JavaScript, but I've run into a problem. Suppose I have two unminified CSS files at:

        http://domain.com/dynamic/styles/theme.php?color=red
        http://domain.com/dynamic/styles/typography.php?font=Arial

    According to the docs I should call the web service like this:

        http://reducisaurus.appspot.com/css?url=http://domain.com/dynamic/styles/theme.php?color=red

    And if I want to minify both CSS files at once:

        http://reducisaurus.appspot.com/css?url1=http://domain.com/dynamic/styles/theme.php?color=red&url2=http://domain.com/dynamic/styles/typography.php?font=Arial

    If I wanted to specify a different number of seconds for the cache (3600, for instance) I would use:

        http://reducisaurus.appspot.com/css?url=http://domain.com/dynamic/styles/theme.php?color=red&expire_urls=3600

    And again for both CSS files at once:

        http://reducisaurus.appspot.com/css?url1=http://domain.com/dynamic/styles/theme.php?color=red&url2=http://domain.com/dynamic/styles/typography.php?font=Arial&expire_urls=3600

    Now my question is: how does Reducisaurus know how to separate the URLs I want? How does it know that &expire_urls=3600 is not part of my URL? And how does it know that &url2=... is not a GET argument of url1? Am I doing this right? Do I need to urlencode my URLs? I took a peek into the source code, and although my Java is very poor it seems that the methods acquireFromRemoteUrl() and getSortedParameterNames() from the BaseServlet.java file hold the answers to my question: if a GET argument name contains - or _ it should be ignored?! What about multiple &url(n)s?
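
    As a hedged aside, the usual way to keep an inner URL's own ?, & and = characters from being parsed as parameters of the outer request is to percent-encode the inner URL before embedding it. A minimal Python sketch of building such a request (the encoding requirement is an assumption about how Reducisaurus splits parameters, not something quoted from its docs):

        from urllib.parse import quote

        base = "http://reducisaurus.appspot.com/css"
        css1 = "http://domain.com/dynamic/styles/theme.php?color=red"
        css2 = "http://domain.com/dynamic/styles/typography.php?font=Arial"

        # Percent-encode each inner URL so its '?', '&' and '=' survive as data
        # instead of splitting the outer query string.
        request = "%s?url1=%s&url2=%s&expire_urls=3600" % (
            base, quote(css1, safe=""), quote(css2, safe=""))
        print(request)

    If the service really does distinguish parameters only by name (url1, url2, expire_urls), encoding this way would also remove the ambiguity asked about above.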

    Read the article

  • Where is my python script spending time? Is there "missing time" in my cprofile / pstats trace?

    - by fmark
    I am attempting to profile a long-running Python script. The script does some spatial analysis on a raster GIS data set using the gdal module. The script currently uses three files: the main script, find_pixel_pairs.py, which loops over the raster pixels; a simple cache in lrucache.py; and some misc classes in utils.py. I have profiled the code on a moderately sized dataset. pstats returns:

        p.sort_stats('cumulative').print_stats(20)

        Thu May  6 19:16:50 2010    phes.profile

        355483738 function calls in 11644.421 CPU seconds

        Ordered by: cumulative time
        List reduced from 86 to 20 due to restriction <20>

           ncalls    tottime   percall    cumtime   percall  filename:lineno(function)
                1      0.008     0.008  11644.421 11644.421  <string>:1(<module>)
                1  11064.926 11064.926  11644.413 11644.413  find_pixel_pairs.py:49(phes)
        340135349    544.143     0.000    572.481     0.000  utils.py:173(extent_iterator)
          8831020     18.492     0.000     18.492     0.000  {range}
           231922      3.414     0.000      8.128     0.000  utils.py:152(get_block_in_bands)
           142739      1.303     0.000      4.173     0.000  utils.py:97(search_extent_rect)
           745181      1.936     0.000      2.500     0.000  find_pixel_pairs.py:40(is_no_data)
           285478      1.801     0.000      2.271     0.000  utils.py:98(intify)
           231922      1.198     0.000      2.013     0.000  utils.py:116(block_to_pixel_extent)
           695766      1.990     0.000      1.990     0.000  lrucache.py:42(get)
          1213166      1.265     0.000      1.265     0.000  {min}
          1031737      1.034     0.000      1.034     0.000  {isinstance}
           142740      0.563     0.000      0.909     0.000  utils.py:122(find_block_extent)
           463844      0.611     0.000      0.611     0.000  utils.py:112(block_to_pixel_coord)
           745274      0.565     0.000      0.565     0.000  {method 'append' of 'list' objects}
           285478      0.346     0.000      0.346     0.000  {max}
           285480      0.346     0.000      0.346     0.000  utils.py:109(pixel_coord_to_block_coord)
              324      0.002     0.000      0.188     0.001  utils.py:27(__init__)
              324      0.016     0.000      0.186     0.001  gdal.py:848(ReadAsArray)
                1      0.000     0.000      0.160     0.160  utils.py:50(__init__)

    The top two calls contain the main loop, i.e. the entire analysis. The remaining calls sum to less than 625 of the 11644 seconds. Where are the remaining 11,000 seconds spent? Is it all within the main loop of find_pixel_pairs.py? If so, can I find out which lines of code are taking most of the time?
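
    One reading of the table above: cProfile charges the tottime of phes (11064 s) with everything executed directly in that function's own frame, so the missing time is the loop body itself rather than the functions it calls. cProfile cannot break that down by line, but bracketing suspect regions of the loop with coarse timers can. A minimal sketch in modern Python, with hypothetical stand-ins for the real per-pixel work:

        import time

        def locate_block(px):       # hypothetical stand-in for the real lookup
            return px * 2

        def score(block):           # hypothetical stand-in for the real computation
            return block ** 0.5

        timers = {"lookup": 0.0, "compute": 0.0}

        def phes(pixels):
            for px in pixels:
                t0 = time.perf_counter()
                block = locate_block(px)
                timers["lookup"] += time.perf_counter() - t0

                t0 = time.perf_counter()
                score(block)
                timers["compute"] += time.perf_counter() - t0

        phes(range(100000))
        print(timers)               # shows which region of the loop dominates

    The third-party line_profiler package automates the same idea per line, at the cost of installing it and decorating the function under test.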

    Read the article

  • With regards to urllib AttributeError: 'module' object has no attribute 'urlopen'

    - by Matt
    import re
    import string
    import shutil
    import os
    import os.path
    import time
    import datetime
    import math
    import urllib
    from array import array
    import random

    filehandle = urllib.urlopen('http://www.google.com/')  # open webpage
    s = filehandle.read()  # read
    print s  # display

    # what i plan to do with it once i get the first part working
    #results = re.findall('[<td style="font-weight:bold;" nowrap>$][0-9][0-9][0-9][.][0-9][0-9][</td></tr></tfoot></table>]', s)
    #earnings = '$ '
    #for money in results:
    #    earnings = earnings + money[1] + money[2] + money[3] + '.' + money[5] + money[6]
    #print earnings
    #raw_input()

    This is the code that I have so far. Now, I have looked at all the other forums that give solutions, such as checking the name of the script (which is parse_Money.py), and I have tried doing it with urllib.request.urlopen, AND I have tried running it on Python 2.5, 2.6, and 2.7. If anybody has any suggestions they would be really welcome, thanks everyone!! --Matt

    ---EDIT---
    I also tried this code and it worked, so I'm thinking it's some kind of syntax error, so if anybody with a sharp eye can point it out, I would be very appreciative.

    import shutil
    import os
    import os.path
    import time
    import datetime
    import math
    import urllib
    from array import array
    import random

    b = 3

    #find URL
    URL = raw_input('Type the URL you would like to read from[Example: http://www.google.com/] :')

    while b == 3:
        #get file name
        file1 = raw_input('Enter a file name for the downloaded code:')
        filepath = file1 + '.txt'
        if os.path.isfile(filepath):
            print 'File already exists'
            b = 3
        else:
            print 'Filename accepted'
            b = 4

    file_path = filepath

    #open file
    FileWrite = open(file_path, 'a')

    #access URL
    filehandle = urllib.urlopen(URL)

    #display source code
    for lines in filehandle.readlines():
        FileWrite.write(lines)
        print lines

    print 'The above has been saved in both a text and html file'

    #close files
    filehandle.close()
    FileWrite.close()
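
    A hedged observation: this exact AttributeError is also what appears when the interpreter is Python 3 (where the function moved to urllib.request), or when another file named urllib.py on the path shadows the standard library module. A small compatibility sketch that runs under both Python 2 and 3, plus a check for shadowing:

        try:
            from urllib.request import urlopen   # Python 3
        except ImportError:
            from urllib import urlopen           # Python 2

        import urllib
        print(urllib.__file__)   # if this prints a path outside the stdlib,
                                 # a local file is shadowing the real module

        page = urlopen('http://www.google.com/')
        print(page.read()[:200])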

    Read the article

  • Swing: what to do when a JTree update takes too long and freezes other GUI elements?

    - by java.is.for.desktop
    Hello, everyone! I know that GUI code in Java Swing must be run via SwingUtilities.invokeAndWait or SwingUtilities.invokeLater; that way threading works fine. Sadly, in my situation, the GUI update is the thing that takes much longer than the background thread(s). More specifically: I update a JTree with only about 400 entries, nesting depth is at most 4, so it should be nothing scary, right? But it sometimes takes one second! I need to ensure that the user is able to type in a JTextPane without delays. Well, guess what: the slow JTree updates do cause delays during input, and the JTextPane refreshes only once the tree has been updated. I know empirically from using NetBeans that a Java app can update lots of information without freezing the rest of the UI. How can it be done?

    NOTE 1: All those DefaultMutableTreeNodes are prepared outside the invokeAndWait.
    NOTE 2: When I replace invokeAndWait with invokeLater, the tree doesn't get updated.
    NOTE 3: Found out that recursive tree expansion takes by far the most time.
    NOTE 4: I'm using a custom tree cell renderer; I will try without it and report back.
    NOTE 4a: My tree cell renderer uses a map to cache and reuse created JTextComponents, keyed by tree node.
    CLUE 1: Wow! Without setting a custom cell renderer it's 10 times faster. I think I'll need a few good tutorials on writing custom tree cell renderers.

    Read the article

  • Are short tags *that* bad?

    - by Col. Shrapnel
    Everyone here on SO says short tags are bad, for 3 main reasons: XML is used everywhere; you can't control where your script is going to be run; and short tags are going to be removed in PHP 6. But let's look at them more closely. The last one is easy: it's just not true. XML is not really a problem either: if you want to use short tags, it's no trouble to write the single XML declaration with a PHP echo, <?= '<?xml ... ?>' ?>. Anyway, why not leave such a trifling thing to one's own judgment? PHP configuration access: oh yes, the only thing that can really be considered a reason, in case you plan to spread your code widely. But if you don't? I think most scripts are written not for wide distribution, but just for one place. If you can use short tags in that place, why abandon them? And it's only templates we are talking about. If you don't like short tags, PHP's native templating is probably not for you; why not try Smarty then? Well, the question is: is there any other real reason to abandon short tags and make it a strict recommendation? Or, as it was said, better to leave short tags alone?

    Read the article

  • Maven Mojo & SCM Plugin: Check for a valid working directory

    - by Patrick Bergner
    Hi there. I'm using maven-scm-plugin from within a Mojo and am trying to figure out how to determine whether a directory D is a valid working directory of an SCM URL U (i.e. a checkout of U to D has already happened). The context is that I want to do a checkout of U into D if D isn't a working copy yet, or an update of D if it is. The plan is to: check out U to D if D does not exist; update D if D is a valid working directory of U; display an error if D exists but is not a valid working directory of U. What I tried is to call ScmManager.status(), ScmManager.list() and ScmManager.changelog() and try to guess something from their results. But that didn't work. The results from status and changelog always return something positive (isSuccess() = true, getChangedFiles() = valid List, no exceptions to catch), whereas list throws an exception in any case. ScmRepository and ScmFileSet don't seem to provide suitable methods either. An option would be to always do an update if D exists, but then I cannot tell whether it is a working directory of U, or any SCM working directory at all. The ideal solution would be independent of the actual SCM system and of a specific ScmVersion. Thanks for your help! Patrick

    Read the article

  • Choosing the right web service

    - by Ratan Sharma
    My website currently runs on ASP.NET 1.1.

    Old process: In our database we have a huge amount of data stored for decoding purposes. We have to update this huge data table each week (the data is supplied by a vendor). Our website (in ASP.NET 1.1) queries our database to decode information.

    New process: Instead of storing the data in our database and querying it, we want to go through a web service, as the vendor now supplies us a DLL which gives us the decoded information directly.

    Information on the DLL provided by the vendor: The DLL can only be added to 4.0 sites, which also implies that I cannot add the DLL directly to my 1.1 site. The DLL exposes certain methods; we simply have to add the DLL reference in our web service, call the methods, and fetch the needed information. Thus we will not have to store that information in our database.

    So which type of web service should I go for (asmx or WCF) that will use the DLL provided by the vendor to fetch the decoded information? The flexibility I am looking for in the web service:
    - It can be consumed from an ASP.NET 1.1 site directly and also using jQuery ajax.
    - It can be consumed from other web services running on the server.
    - It can be consumed from some Windows services running on the server.

    NOTE: Moreover, we have a plan to migrate our website from ASP.NET 1.1 to 4.0 in the future, so it should also support that upgrade.

    Read the article

  • Using a class within a class?

    - by Josh
    I built myself a MySQL class which I use in all my projects. I'm about to start a project that is heavily based on user accounts, and I plan on building my own class for this as well. The thing is, a lot of the methods in the user class will be MySQL queries and methods from the MySQL class. For example, I have a method in my user class to update someone's password:

        class User {
            function updatePassword($userName, $newPass) {
                $con = mysql_connect('db_host', 'db_user', 'db_pass');
                $sql = "UPDATE users SET password = '$newPass' WHERE username = '$userName'";
                $res = mysql_query($sql, $con);
                mysql_close($con);
                if ($res) return true;
            }
        }

    (I kind of rushed this, so excuse any syntax errors.) As you can see, that would use MySQL to update a user's password, but without using my MySQL class. Is this correct? I see no way to use my MySQL class within my User class without it seeming dirty. Do I just use it the normal way, like $DB = new DB();? That would mean including my mysql.class.php somewhere too... I'm quite confused about this, so any help would be appreciated, thanks.

    Read the article

  • Best practices for managing updating a database with a complex set of changes

    - by Sarge
    I am writing an application where I have some publicly available information in a database which I want the users to be able to edit. The information is not textual like a wiki, but it is similar in concept, because the edits bring the public information increasingly closer to the truth. The changes will affect multiple tables, and each update needs to be automatically checked before it affects the public tables. I'm working on the design and I'm wondering if there are any best practices that might help with some particular issues: I want to provide undo capability; I want to show the user the combined result of all their changes; and when the user says they're done, I need to check the underlying public data to make sure it hasn't been changed by somebody else. My current plan is to have the user work in a set of tables set up as a private working area. Once they're ready, they can kick off a process to check everything and update the public tables. Undo can be recorded using the Command pattern, saving commands to a table (see the sketch below). Are there any techniques I might have missed, or useful papers or patterns? Thanks in advance!
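
    For what it's worth, a minimal sketch of the Command-pattern undo idea, assuming each edit is captured together with the SQL that reverses it (the table and column names are illustrative only):

        import sqlite3

        class Command:
            """One reversible edit: the SQL to apply it and the SQL to undo it."""
            def __init__(self, apply_sql, undo_sql):
                self.apply_sql = apply_sql
                self.undo_sql = undo_sql

        class EditSession:
            def __init__(self):
                self.history = []   # in the real design, rows in an undo table

            def do(self, db, cmd):
                db.execute(cmd.apply_sql)
                self.history.append(cmd)

            def undo(self, db):
                if self.history:
                    db.execute(self.history.pop().undo_sql)

        db = sqlite3.connect(":memory:")
        db.execute("CREATE TABLE places (id INTEGER PRIMARY KEY, name TEXT)")
        db.execute("INSERT INTO places VALUES (1, 'old name')")

        session = EditSession()
        session.do(db, Command(
            "UPDATE places SET name = 'new name' WHERE id = 1",
            "UPDATE places SET name = 'old name' WHERE id = 1"))
        session.undo(db)   # restores 'old name'

    Persisting the undo SQL in a table, rather than keeping it in memory, is what would let the history survive across sessions, at the cost of capturing old values at edit time.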

    Read the article

  • "filter" json through ajax (jquery)

    - by Hyung Suh
    I'm trying to filter a JSON array through ajax and am not sure how to do so. The data looks like:

        {"posts": [
            {"image": "images/bbtv.jpg", "group": "a"},
            {"image": "images/grow.jpg", "group": "b"},
            {"image": "images/tabs.jpg", "group": "a"},
            {"image": "images/bia.jpg",  "group": "b"}
        ]}

    I want to show only the items in group A or group B. How would I have to change my ajax call to filter the content?

        $.ajax({
            type: "GET",
            url: "category/all.js",
            dataType: "json",
            cache: false,
            contentType: "application/json",
            success: function (data) {
                $('#folio').html("<ul/>");
                $.each(data.posts, function (i, post) {
                    $('#folio ul').append('<li><div class="boxgrid captionfull"><img src="' + post.image + '" /></div></li>');
                });
                initBinding();
            },
            error: function (xhr, status, error) {
                alert(xhr.status);
            }
        });

    Also, how can I make each link trigger the filter?

        <a href="category/all.js">Group A</a>
        <a href="category/all.js">Group B</a>

    Sorry for all these questions; I can't seem to find a solution. Any help in the right direction would be appreciated. Thanks!

    Read the article

  • Can a large transaction log cause cpu hikes to occur

    - by Simon Rigby
    Hello all, I have a client with a very large database on SQL Server 2005. The total space allocated to the db is 15 GB, with roughly 5 GB to the db and 10 GB to the transaction log. Just recently, a web application that connects to that db has been timing out. I have traced the actions on the web page and examined the queries that execute while these web operations are performed. There is nothing untoward in the execution plan. The query itself uses multiple joins but completes very quickly. However, the db server's CPU spikes to 100% for a few seconds. The issue occurs when several simultaneous users are working on the system (when I say several... read about 5). Under this load, timeouts start to occur. I suppose my question is: can a large transaction log cause issues with CPU performance? There is about 12 GB of free space on the disk currently. The configuration is a little out of my hands, but the db and log are both on the same physical disk. I appreciate that the log file is massive and needs attending to, but I'm just looking for a heads up as to whether this may cause CPU spikes (i.e. trying to find the correlation). The timeouts are a recent thing, and this app has been responsive for a few years (i.e. it's a recent manifestation). Many thanks,

    Read the article

  • FTP FileWatcher

    - by Meiscooldude
    So, I am in this little predicament where I am stuck watching a few FTP folders to see if they have new files added to them. If they do, the watcher needs to raise an event with the file name, thereby telling something else to download that file. This is a pretty simple object to make; I was just curious whether anyone knew how expensive this operation would be. I plan on using the NLST command because I don't need file size information, and there will be no sub-directories in the folder. Each file in the folder will have exactly 25 characters in its name. There could be anywhere from 10 to maybe a couple thousand (max around 2000) files per folder (usually on the lower end, 100-300, but currently growing). The files are anywhere from 250 KB to a very, VERY unlikely 10 MB (usually within the 250 KB to 4 MB range). There could possibly be up to a few hundred folders (in which case I could change the watch frequency depending on the number of folders), but currently there are only a few (6-10ish). There would also be multiple logins for the FTP server; different logins would have access to different folders. I am not asking for an implementation, just whether anyone has some first- or second-hand knowledge about FTP and how this could affect my network. I am not opposed to putting in file retention times or changing the frequency with which I check for new files.
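
    Not an implementation, but a rough sketch of the polling idea in Python's ftplib to make the cost concrete: one NLST per folder per poll returns just the names, and a set difference against the previous poll yields the new files (the host, credentials and folder below are hypothetical):

        import time
        from ftplib import FTP

        def watch(host, user, password, folder, interval=60):
            seen = set()
            while True:
                ftp = FTP(host)
                ftp.login(user, password)
                names = set(ftp.nlst(folder))   # NLST: names only, no sizes
                ftp.quit()
                for name in sorted(names - seen):
                    print("new file:", name)    # fire the download event here
                seen = names
                time.sleep(interval)

        # watch("ftp.example.com", "someuser", "secret", "/incoming")

    At 25 bytes per name plus a line terminator, even 2000 files is only around 50 KB of listing traffic per poll, so the directory scan itself should be cheap next to the downloads.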

    Read the article

  • mysql to excel generation using php

    - by pmms
    <?php
    // DB Connection here
    mysql_connect("localhost", "root", "");
    mysql_select_db("hitnrunf_db");

    $select = "SELECT * FROM jos_users";
    $export = mysql_query($select) or die("Sql error : " . mysql_error());

    $fields = mysql_num_fields($export);
    $header = '';
    for ($i = 0; $i < $fields; $i++) {
        $header .= mysql_field_name($export, $i) . "\t";
    }

    $data = '';
    while ($row = mysql_fetch_row($export)) {
        $line = '';
        foreach ($row as $value) {
            if ((!isset($value)) || ($value == "")) {
                $value = "\t";
            } else {
                $value = str_replace('"', '""', $value);
                $value = '"' . $value . '"' . "\t";
            }
            $line .= $value;
        }
        $data .= trim($line) . "\n";
    }

    $data = str_replace("\r", "", $data);
    if ($data == "") {
        $data = "\n(0) Records Found!\n";
    }

    header("Content-type: application/octet-stream");
    header("Content-Disposition: attachment; filename=your_desired_name.xls");
    header("Pragma: no-cache");
    header("Expires: 0");
    print "$header\n$data";
    ?>

    The code above is used for generating an Excel spreadsheet from a MySQL database, but we are getting the following error:

        The file you are trying to open, 'users.xls', is in a different format than specified
        by the file extension. Verify that the file is not corrupted and is from a trusted
        source before opening the file. Do you want to open the file now?

    What is the problem and how do we fix it?

    Read the article

  • How can I embed images in an ASP.NET Generated Word File

    - by Nikos Steiakakis
    Hi everyone! I have a quite common problem, as I saw in various user groups, but I could not find a suitable answer. What I want to do is generate an ASP.NET page in my website which has the option of being exported into Microsoft Word .doc format. The method I have used is this:

        Response.Clear();
        Response.AddHeader("content-disposition", "attachment;filename=Test.doc");
        Response.Charset = "";
        Response.Cache.SetCacheability(HttpCacheability.NoCache);
        Response.ContentType = "application/msword";
        StringWriter sw = new StringWriter();
        HtmlTextWriter htmlWrite = new HtmlTextWriter(sw);
        Page.RenderControl(htmlWrite);
        Response.Write(sw.ToString());
        Response.End();

    However, even though this generates a Word doc, the images are not embedded in the document; rather, they are placed as links. I have looked for a way to do this but have not found something that actually worked. I would appreciate any help I can get, since this is a "last minute" requirement (talk about typical). Thanks

    Read the article

  • How to display custom processing message in JQuery datatables

    - by Sukhi
    I am using the DataTables API to display data in my ASP.NET 4.0 application. I have one column [ Delete ] to delete the row data. When I click this link, I send a jQuery ajax request to delete the row from the database. I want to display a message such as [ Deleting record... ] to the end user until the data is deleted by the server-side processing. I put a div on my page containing the message [ Deleting record... ]; when I click the delete link I display that message, but when the delete operation completes it also displays the message [ Processing... ] (which is the built-in message of DataTables), so two messages are showing, which looks odd. What can I do better to display the message to the end user?

    JS code:

        $('#tblVideoList .delete').live('click', function (e) {
            e.preventDefault();
            var oTable = $('#tblVideoList').dataTable();
            var aPos = oTable.fnGetPosition(this.parentNode);
            var aData = oTable.fnGetData(aPos[0]);
            if (confirm('Are you sure want to delete the record.')) {
                $("#divDelete").show();
                var today = new Date();
                $.ajax({
                    type: "GET",
                    cache: false,
                    url: "samplepage.aspx",
                    success: function (msg) {
                        $("#divDelete").hide();
                        oTable.fnDraw();
                    }
                });
            }
            return false;
        });

    Thanks

    Read the article

  • Divide a path into N sections using Java or PostgreSQL/PostGIS

    - by Guido
    Imagine a GPS tracking system that is following the position of several objects. The points are stored in a database (PostgreSQL + PostGIS). Each path is composed of a different number of points. That is the reason why, in order to compare a pair of paths, I need to divide every path into a set of 100 points. Do you know of any PostGIS function that already implements this algorithm? I've not been able to find one. If not, I'd like to solve it using Java. In that case I'd like to know an efficient and easy-to-implement algorithm to divide a path into N points. The simplest example would be to divide this path into three points:

        position 1 : x=1, y=2
        position 2 : x=9, y=3

    And the result should be:

        position 1 : x=1, y=2 (starting point)
        position 2 : x=5, y=2.5
        position 3 : x=9, y=3 (end point)

    Edit: By 'compare a pair of paths' I mean to calculate the distance between two paths. I plan to divide each path into 100 points, and sum the euclidean distances between corresponding points as the distance between the two paths.
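
    A sketch of the resampling idea in Python, easy to transliterate to Java and assuming straight segments between stored points: compute the cumulative length of the polyline, then interpolate N points at evenly spaced distances along it. (On the PostGIS side, if I recall the API correctly, ST_Line_Interpolate_Point(line, fraction) does the per-point equivalent on a LINESTRING, so looping fraction over 0, 1/99, ..., 1 may spare the Java version entirely.)

        from math import hypot

        def resample(points, n):
            """Return n >= 2 points evenly spaced along a polyline."""
            # Cumulative distance from the start to each vertex.
            cum = [0.0]
            for (x0, y0), (x1, y1) in zip(points, points[1:]):
                cum.append(cum[-1] + hypot(x1 - x0, y1 - y0))
            total, out, seg = cum[-1], [], 0
            for i in range(n):
                target = total * i / (n - 1)
                # Advance to the segment containing the target distance.
                while seg < len(points) - 2 and cum[seg + 1] < target:
                    seg += 1
                span = cum[seg + 1] - cum[seg] or 1.0
                t = (target - cum[seg]) / span
                (x0, y0), (x1, y1) = points[seg], points[seg + 1]
                out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
            return out

        print(resample([(1, 2), (9, 3)], 3))
        # [(1.0, 2.0), (5.0, 2.5), (9.0, 3.0)]  -- matches the example above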

    Read the article

  • Getting identity from Ado.Net Update command

    - by rboarman
    My scenario is simple: I am trying to persist a DataSet and have the identity column filled in so I can add child records. Here's what I've got so far:

        using (SqlConnection connection = new SqlConnection(connStr))
        {
            SqlDataAdapter adapter = new SqlDataAdapter("select * from assets where 0 = 1", connection);
            adapter.MissingMappingAction = MissingMappingAction.Passthrough;
            adapter.MissingSchemaAction = MissingSchemaAction.AddWithKey;
            SqlCommandBuilder cb = new SqlCommandBuilder(adapter);
            var insertCmd = cb.GetInsertCommand(true);
            insertCmd.Connection = connection;
            connection.Open();
            adapter.InsertCommand = insertCmd;
            adapter.InsertCommand.CommandText += "; set ? = SCOPE_IDENTITY()";
            adapter.InsertCommand.UpdatedRowSource = UpdateRowSource.OutputParameters;
            var param = new SqlParameter("RowId", SqlDbType.Int);
            param.SourceColumn = "RowId";
            param.Direction = ParameterDirection.Output;
            adapter.InsertCommand.Parameters.Add(param);
            SqlTransaction transaction = connection.BeginTransaction();
            insertCmd.Transaction = transaction;
            try
            {
                assetsImported = adapter.Update(dataSet.Tables["Assets"]);
                transaction.Commit();
            }
            catch (Exception ex)
            {
                transaction.Rollback();
                // Log an error
            }
            connection.Close();
        }

    The first thing I noticed, besides the fact that the identity value is not making its way back into the DataSet, is that my change to add the scope_identity select statement to the insert command is not being executed. Looking at the query using Profiler, I do not see my addition to the insert command. Questions: 1) Why is my addition to the insert command not making its way to the SQL being executed on the database? 2) Is there a simpler way to have my DataSet refreshed with the identity values of the inserted rows? 3) Should I use the OnRowUpdated callback to add my child records? My plan was to loop through the rows after the Update() call and add children as needed. Thank you in advance. Rick

    Read the article

  • Use Lambda in Attribute constructor to get method's parameters

    - by Anthony Shaw
    I'm not even sure if this is possible, but I've exhausted all of my ideas trying to figure it out, so I figured I'd send this out to the community and see what you thought. And if it's not possible, maybe you'd have some ideas as well. I'm trying to make an Attribute class that I can add to a method that would allow me to use a lambda expression to refer to each parameter of the method:

        public class ExampleAttribute : Attribute
        {
            public object Value { get; set; }

            public ExampleAttribute(--something here to make the lambda work--, object value)
            {
                Value = value;
            }
        }

    I'd like to be able to do something like this:

        [Example(x => x.Id, 4)]
        [Example(x => x.filter, "string value")]
        public ActionResult Index(int Id, string filter)
        {
            return View();
        }

    I understand that I might be completely dreaming with this idea. I'm basically trying to write a model to allow for self-documenting REST API documentation. In a recent project here at work we wrote a dozen or so services with 5 to 15 methods each; I figure it's easier to write something to do this than to hand-code a documentation page for each. I plan to eventually release this as an open-source project once I have it in a place where I feel it's releasable.

    Read the article

  • ECM (Niche Vs Mass Market)

    - by Luj Reyes
    Hi everyone, I recently started a little company with a couple of guys. Ours is the typical startup: a lot of ideas, dreams, talent and work hours :P. Our initial business plan was to develop a DM (Document Manager) with several features found in DropBox and other tools, but with a big differentiator. Then we got this Business Guy on the team (I must say that several of us could be called 'Business Guys', but we are mainly hackers; he is just another 'Networking Guy'), and along with him came a market analysis for a DM aimed at a very specific and narrow niche. We have many reasons to believe in his market study, and the idea is the classic "The market is X million, so if we grab a 10%...", and the market is really there to grab, because all the big providers deemed it too little and fled. Let's say the market is 5 million USD, and it demands very specific features. If we decide to go for this niche product, we face a sales cycle of about 7 months, and the main goal of this revenue is to fund more ambitious projects. (Institutional VC is out of the question if you want to keep more than a marginal ownership of your company in my country.) The only overlap between the niche and the mass-market product features is the ability to store documents; everything else requires that we focus all of our efforts towards one or the other. I've studied a lot about the differences between mass and niche markets, but I want to hear from people with actual experience. So everything comes down to this: if you have a really "saleable" idea, what is the right thing to do: go for the niche, or go for the big prize and target primarily the mass market? Thanks for your input

    Read the article

  • What does flushing thread local memory to global memory mean?

    - by Jack Griffith
    Hi, I am aware that the purpose of volatile variables in Java is that writes to such variables are immediately visible to other threads. I am also aware that one of the effects of a synchronized block is to flush thread-local memory to global memory. I have never fully understood the references to 'thread-local' memory in this context. I understand that data which only exists on the stack is thread-local, but when talking about objects on the heap my understanding becomes hazy. I was hoping to get comments on the following points:

    - When executing on a machine with multiple processors, does flushing thread-local memory simply refer to the flushing of the CPU cache into RAM?
    - When executing on a uniprocessor machine, does this mean anything at all?
    - If it is possible for the heap to have the same variable at two different memory locations (each accessed by a different thread), under what circumstances would this arise? What implications does this have for garbage collection? How aggressively do VMs do this kind of thing?

    Overall, I think I am trying to understand whether thread-local means memory that is physically accessible by only one CPU, or whether there is logical thread-local heap partitioning done by the VM. Any links to presentations or documentation would be immensely helpful. I have spent time researching this, and although I have found lots of nice literature, I haven't been able to satisfy my curiosity regarding the different situations and definitions of thread-local memory. Thanks very much.

    Read the article

  • Safe way to set computed environment variables

    - by sfink
    I have a bash script that I am modifying to accept key=value pairs from stdin. (It is spawned by xinetd.) How can I safely convert those key=value pairs into environment variables for subprocesses? I plan to only allow keys that begin with a predefined prefix "CMK_", to avoid IFS or any other "dangerous" variable getting set. But the simplistic approach

        function import () {
            local IFS="="
            while read key val; do
                case "$key" in
                    CMK_*) eval "$key=$val";;
                esac
            done
        }

    is horribly insecure, because $val could contain all sorts of nasty stuff. This seems like it would work:

        shopt -s extglob
        function import () {
            NORMAL_IFS="$IFS"
            local IFS="="
            while read key val; do
                case "$key" in
                    CMK_*([a-zA-Z_]) )
                        IFS="$NORMAL_IFS"
                        eval $key='$val'
                        IFS="="
                        ;;
                esac
            done
        }

    but (1) it uses the funky extglob thing that I've never used before, and (2) it's complicated enough that I can't be comfortable that it's secure. My goal, to be specific, is to allow key=value settings to pass through the bash script into the environment of called processes. It is up to the subprocesses to deal with potentially hostile values getting set. I am modifying someone else's script, so I don't want to just convert it to Perl and be done with it. I would also rather not change it around to invoke the subprocesses differently, something like:

        #!/bin/sh
        ...start of script...
        perl -nle '($k,$v)=split(/=/,$_,2); $ENV{$k}=$v if $k =~ /^CMK_/; END { exec("subprocess") }'
        ...end of script...
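
    For comparison only (the question rules out rewriting the script, and this is the same trade-off as the perl one-liner above): a hedged Python sketch of the whitelist idea where no eval is involved at all, so values stay pure data. A hypothetical wrapper reads the key=value lines, filters keys against the CMK_ prefix, and execs the real worker with the filtered environment; the worker path is illustrative.

        import os
        import re
        import sys

        KEY_RE = re.compile(r"^CMK_[A-Za-z_]+$")    # same whitelist as the extglob pattern

        env = dict(os.environ)
        for line in sys.stdin:
            key, sep, val = line.rstrip("\n").partition("=")
            if sep and KEY_RE.match(key):
                env[key] = val                      # value is stored, never evaluated

        # Hypothetical path to the real worker script.
        os.execve("/usr/local/bin/subprocess", ["subprocess"], env)

    It illustrates why the eval is the only dangerous part of the bash version: once values are never re-parsed as code, the prefix check on the key is the whole security story.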

    Read the article

  • How to distinguish between two different UDP clients on the same IP address?

    - by Ricket
    I'm writing a UDP server, which is a first for me; I've only done a bit of TCP communications. And I'm having trouble figuring out exactly how to distinguish which user is which, since UDP deals only with packets rather than connections and I therefore cannot tell exactly who I'm communicating with. Here is pseudocode of my current server loop:

        DatagramPacket p;
        socket.receive(p);
        // now p contains the user's IP and port, and the data
        int key = getKey(p);
        if (key == 0) { // connection request
            key = makeKey(p);
            clients.add(key, p.ip);
            send(p.ip, p.port, key); // give the user his key
        } else { // user has a key
            // verify key belongs to that IP address
            // look up the user's session data based on the key
            // react to the packet in the context of the session
        }

    In designing this, I kept these points in mind: multiple users may exist on the same IP address, due to the presence of routers, so users must have a separate identification key; packets can be spoofed, so the key should be checked against its original IP address and ignored if a different IP tries to use the key; and the outbound port on the client side might change between packets. Is that third assumption correct, or can I simply assume that one user = one IP+port combination? Is this commonly done, or should I continue to create a special key like I am currently doing? I'm not completely clear on how TCP negotiates a connection, so if you think I should model it on TCP then please link me to a good tutorial or something on TCP's SYN/SYNACK/ACK mess. Also note, I do have a provision to resend a key if an IP sends a 0 and that IP already has a pending key; I omitted it to keep the snippet simple. I understand that UDP is not guaranteed to arrive, and I plan to add reliability to the main packet handling code later as well.
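
    A minimal sketch of that handshake in Python sockets (the port, the 8-byte key scheme and the handler are all illustrative): the server issues a random key on first contact, and afterwards accepts a datagram only if the key it leads with was issued to the sender's IP.

        import os
        import socket

        def handle(key, payload):
            print("session", key.hex(), "sent", payload)   # per-session logic goes here

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("", 9999))
        sessions = {}                          # key -> IP it was issued to

        while True:
            data, (ip, port) = sock.recvfrom(2048)
            key = data[:8]
            if key == b"\x00" * 8:             # connection request
                key = os.urandom(8)
                sessions[key] = ip
                sock.sendto(key, (ip, port))   # give the user his key
            elif sessions.get(key) == ip:
                handle(key, data[8:])
            # else: wrong IP for that key (possible spoof), drop silently

    Replying to the (ip, port) of the most recent datagram from a session also copes with the third point above: if a NAT rebinds the client's source port, the server simply follows it.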

    Read the article

  • 1k of Program Space, 64 bytes of RAM. Is assembly an absolute must?

    - by Earlz
    (If you're lazy, see the bottom for the TL;DR.) Hello, I am planning to build a new (prototype) project dealing with physical computing. Basically, I have wires. These wires all need to have their voltage read at the same time; more than a few hundred microseconds' difference between the readings of each wire will completely skew the results. The Arduino takes about 114 microseconds per reading, so the most I could read is 2 or 3 wires before the latency would hurt the accuracy of the readings. So my plan is to have an Arduino as the "master" of an array of ATtinys. The Arduino is pretty cramped for space, but it's a massive playground compared to the tinys. An ATtiny13A has 1k of flash ROM (program space), 64 bytes of RAM, and 64 bytes of (not-durable and slow) EEPROM. (I'm choosing it for price as well as size.) The ATtinys in my system will not do much. Basically, all they will do is wait for a signal from the master, then read the voltage of 1 or 2 wires and store it in RAM (or possibly EEPROM if it's that cramped), and then send it to the master using only 1 wire for data (no room for more than that!). So far, then, all I should have to do is implement trivial voltage-reading code (using the built-in ADC). But it's the communication bit I'm worried about. Do you think a communication protocol (using just 1 wire!) could even be implemented in such constraints?

    TL;DR: In less than 1k of program space and 64 bytes of RAM (and 64 bytes of EEPROM), do you think it is possible to implement a 1-wire communication protocol? Would I need to drop to assembly to make it fit? I know that my current Arduino programs linking to the Wiring library are over 8k, so I'm a bit concerned.

    Read the article
