Search Results

Search found 6394 results on 256 pages for 'regular expressions'.


  • How do you remind your Scrum Product Owner about his promises/actions?

    - by Felix Ogg
    ** EDIT: Rephrased the question to re-focus ** Our Scrum team meets as seldomly as possible, but we meet with the product owner every chance we get. We track everyone's agreed action points (particularly theirs). We are 100% agile, but our product owner lives in traditional world, we remain off-site. We facilitate him in crossing over to our fast-paced world. There's not much wrong. The team and the PO are in good spirits. PO is present at every meeting and positively energized. Just imagine this person as a 70 year old, slow grandpa, who is forgetful, yet kind. In reality he isn't, but he is used to a working environment (public servants) that is much slooooower. Manyana-manyana etc. It is frustrating for my team to cooperate: PO lives in a non-prioritized environment, and everyone in it has learned the productivity-technique of NGTD (Not Getting Things Done). He WANTS to, it's just that he forgets or 'sinks' somewhere along the away. We have experimented with a text file, maintained by the Scrum master (low-tech), which he broadcasts by e-mail every day JIRA, our issue tracker. Turns out this is nice for programmers, but too steep for 'regular people' I Googled for Issue tracking webtools but came up empty handed: All tools are aimed at IT issue tracking, instead of meeting action point tracking/planning for mere mortals. I did find TODO-lists like RememberTheMilk, but they don't track comments, and - to be honest - I doubt we could get our product owner to use it (too complicated). We have three requirements: Register action points, assign to a team member and a deadline Offer anyone to 'comment' on progress of any action point Do not build our own tool from scratch We do not need: - impressive authorization models, - multi-project, - workflow, - crosslinking. Is there any trick/tool you use to assist your product owner 'fly' like the rest of the rest of the team? Communication before tools I agree with the general consensus that one should not try to apply technology to a communication problem, however in this case I am merely looking for a tool to save me time in setting up prioritized lists. I found www.thymer.com today, may be what I am looking for. The guys are cool. It is getting rather feature-bloated though.

    Read the article

  • compressed archive with quick access to individual file

    - by eric.frederich
    I need to come up with a file format for a new application I am writing. This file will need to hold a bunch of other files which are mostly text but can be other formats as well. Naturally, a compressed tar file seems to fit the bill. The problem is that I want to be able to retrieve some data from the file very quickly, and getting just a particular file from a tar.gz file seems to take longer than it should. I am assuming that this is because it has to decompress the entire archive even though I just want one file. When I have just a regular uncompressed tar file I can get that data very quickly. Let's say the file I need quickly is called data.dat. For example, the command... tar -x data.dat -zf myfile.tar.gz ...is what takes a lot longer than I'd like. MP3 files have ID3 data and JPEG files have EXIF data that can be read quickly without opening the entire file. I would like my data.dat file to be available in a similar way. I was thinking that I could leave it uncompressed and separate from the rest of the files in myfile.tar.gz. I could then create a tar file of data.dat and myfile.tar.gz, and then hopefully that data could be retrieved faster because it is at the head of the outer tar file and is uncompressed. Does this sound right?... putting a compressed tar inside of a tar file? Basically, my need is to have an archive type of file with quick access to one particular file. Tar does this just fine, but I'd also like to have that data compressed, and as soon as I do that I no longer have quick access. Are there other archive formats that will give me the quick access I need? As a side note, this application will be written in Python. If the solution calls for a reinvention of the wheel with my own binary format, I am familiar with C and would have no problem writing the Python module in C. Ideally I'd just use tar, dd, cat, gzip, etc. though. Thanks, ~Eric
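
    One option that keeps quick access while still compressing the bulk of the data is a zip archive rather than a .tar.gz: each member is compressed independently, so a single file can be read without inflating anything else. Below is a minimal Python sketch of that idea (the file names are just placeholders); whether zip's per-member compression is close enough to a solid tar.gz is something to measure on the real data.

        import zipfile

        def build_archive(archive_path, bulk_files, quick_file):
            with zipfile.ZipFile(archive_path, "w", zipfile.ZIP_DEFLATED) as zf:
                for name in bulk_files:
                    zf.write(name)                                       # compressed members
                zf.write(quick_file, compress_type=zipfile.ZIP_STORED)   # stored uncompressed for fastest access

        def read_quick_file(archive_path, quick_file):
            with zipfile.ZipFile(archive_path, "r") as zf:
                return zf.read(quick_file)   # only this member is located and read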

    Read the article

  • Stuck at being unable to print a substring of more than 4679 characters

    - by Newcoder
    I have a program that does string manipulation on very large strings (around 100K characters). The first step in my program is to clean up the input string so that it only contains certain characters. Here is my method for this cleanup: public static String analyzeString (String input) { String output = null; output = input.replaceAll("[-+.^:,]",""); output = output.replaceAll("(\\r|\\n)", ""); output = output.toUpperCase(); output = output.replaceAll("[^XYZ]", ""); return output; } When I print my 'input' string of length 97498, it prints successfully. My output string after cleanup is of length 94788. I can print the size using output.length(), but when I try to print the string itself in Eclipse, the output is empty and all I see is the Eclipse console output header. Since this is not my final program, I ignored this and proceeded to the next method, which does pattern matching on this cleaned-up string. Here is the code for pattern matching: public static List<Integer> getIntervals(String input, String regex) { List<Integer> output = new ArrayList<Integer> (); // Do pattern matching Pattern p1 = Pattern.compile(regex); Matcher m1 = p1.matcher(input); // If match found while (m1.find()) { output.add(m1.start()); output.add(m1.end()); } return output; } Based on this program, I identify the start and end intervals of my pattern match as 12351 and 87314. I tried to print this match as output.substring(12351, 87314) and only get blank output. Numerous trial-and-error runs led me to conclude that the biggest substring I can print is of length 4679; if I try 4680, I again get blank output. My confusion is this: if I was able to print the original string (length 97498), why can't I print the cleaned-up string (length 94788) or a substring longer than 4679 characters? Is it due to the regular expression implementation causing some memory issue that my system cannot handle? I have 4GB of installed memory.
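
    For what it's worth, the blank output may simply be the Eclipse console truncating or dropping very long lines rather than a regex or memory problem; writing the result to a file sidesteps the console entirely. Here is a rough Python sketch of the same pipeline (cleanup, collect match intervals, dump each matched span to a file) purely as an illustration; the file name and the pattern are made up.

        import re

        def analyze_string(text):
            text = re.sub(r"[-+.^:,]", "", text)
            text = re.sub(r"[\r\n]", "", text)
            return re.sub(r"[^XYZ]", "", text.upper())

        def get_intervals(text, pattern):
            return [(m.start(), m.end()) for m in re.finditer(pattern, text)]

        with open("input.txt") as f:                          # hypothetical input file
            cleaned = analyze_string(f.read())
        for start, end in get_intervals(cleaned, "X+Y+Z+"):   # placeholder pattern
            with open("match_%d_%d.txt" % (start, end), "w") as out:
                out.write(cleaned[start:end])                 # no console involved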

    Read the article

  • Rails send mail with GMail

    - by Danny McClelland
    Hi Everyone, I am on rails 2.3.5 and have the latest Ruby installed and my application is running well, except, GMail emails. I am trying to setup my gmail imap connection which has worked previously but now doesnt want to know. This is my code: # Be sure to restart your server when you modify this file # Uncomment below to force Rails into production mode when # you don't control web/app server and can't set it the proper way # ENV['RAILS_ENV'] ||= 'production' # Specifies gem version of Rails to use when vendor/rails is not present RAILS_GEM_VERSION = '2.3.5' unless defined? RAILS_GEM_VERSION # Bootstrap the Rails environment, frameworks, and default configuration require File.join(File.dirname(__FILE__), 'boot') Rails::Initializer.run do |config| # Gems config.gem "capistrano-ext", :lib => "capistrano" config.gem "configatron" # Make Time.zone default to the specified zone, and make Active Record store time values # in the database in UTC, and return them converted to the specified local zone. config.time_zone = "London" # The internationalization framework can be changed to have another default locale (standard is :en) or more load paths. # All files from config/locales/*.rb,yml are added automatically. # config.i18n.load_path << Dir[File.join(RAILS_ROOT, 'my', 'locales', '*.{rb,yml}')] #config.i18n.default_locale = :de # Your secret key for verifying cookie session data integrity. # If you change this key, all old sessions will become invalid! # Make sure the secret is at least 30 characters and all random, # no regular words or you'll be exposed to dictionary attacks. config.action_controller.session = { :session_key => '_base_session', :secret => '7389ea9180b15f1495a5e73a69a893311f859ccff1ffd0fa2d7ea25fdf1fa324f280e6ba06e3e5ba612e71298d8fbe7f15fd7da2929c45a9c87fe226d2f77347' } config.active_record.observers = :user_observer end ActiveSupport::CoreExtensions::Date::Conversions::DATE_FORMATS.merge!(:default => '%d/%m/%Y') ActiveSupport::CoreExtensions::Time::Conversions::DATE_FORMATS.merge!(:default => '%d/%m/%Y') require "will_paginate" ActionMailer::Base.delivery_method = :smtp ActionMailer::Base.smtp_settings = { :enable_starttls_auto => true, :address => "smtp.gmail.com", :port => 587, :domain => "XXXXXXXX.XXX", :authentication => :plain, :user_name => "XXXXXXXXXX.XXXXXXXXXX.XXX", :password => "XXXXX" } But the above just results in an SMTP auth error in the production log. I have read varied reports of this not working in Rails 2.2.2 but nothing for 2.3.5, anyone got any ideas? Thanks, Danny
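
    One way to rule Rails out is to try the same connection settings from a tiny standalone script: if this also fails to authenticate, the problem is the account or credentials (or Google blocking the sign-in) rather than the ActionMailer configuration. A Python smtplib sketch with obviously hypothetical credentials:

        import smtplib

        def check_gmail_login(user, password):
            with smtplib.SMTP("smtp.gmail.com", 587) as server:
                server.ehlo()
                server.starttls()              # TLS is required before AUTH on port 587
                server.ehlo()
                server.login(user, password)   # raises SMTPAuthenticationError on bad credentials
            return True

        # check_gmail_login("someone@example.com", "app-password")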

    Read the article

  • Loading the last related record instantly for multiple parent records using Entity framework

    - by Guillaume Schuermans
    Does anyone know a good approach using Entity Framework for the problem described below? I am trying for our next release to come up with a performant way to show the placed orders for the logged on customer. Of course paging is always a good technique to use when a lot of data is available I would like to see an answer without any paging techniques. Here's the story: a customer places an order which gets an orderstatus = PENDING. Depending on some strategy we move that order up the chain in order to get it APPROVED. Every change of status is logged so we can see a trace for statusses and maybe even an extra line of comment per status which can provide some extra valuable information to whoever sees this order in an interface. So an Order is linked to a Customer. One order can have multiple orderstatusses stored in OrderStatusHistory. In my testscenario I am using a customer which has 100+ Orders each with about 5 records in the OrderStatusHistory-table. I would for now like to see all orders in one page not using paging where for each Order I show the last relevant Status and the extra comment (if there is any for this last status; both fields coming from OrderStatusHistory; the record with the highest Id for the given OrderId). There are multiple scenarios I have tried, but I would like to see any potential other solutions or comments on the things I have already tried. Trying to do Include() when getting Orders but this still results in multiple queries launched on the database. Each order triggers an extra query to the database to get all orderstatusses in the history table. So all statusses are queried here instead of just returning the last relevant one, plus 100 extra queries are launched for 100 orders. You can imagine the problem when there are 100000+ orders in the database. Having 2 computed columns on the database: LastStatus, LastStatusInformation and a regular Linq-Query which gets those columns which are available through the Entity-model. The problem with this approach is the fact that those computed columns are determined using a scalar function which can not be changed without removing the formula from the computed column, etc... In the end I am very familiar with SQL and Stored procedures, but since the rest of the data-layer uses Entity Framework I would like to stick to it as long as possible, even though I have my doubts about performance. Using the SQL approach I would write something like this: WITH cte (RN, OrderId, [Status], Information) AS ( SELECT ROW_NUMBER() OVER (PARTITION BY OrderId ORDER BY Id DESC), OrderId, [Status], Information FROM OrderStatus ) SELECT o.Id, cte.[Status], cte.Information AS StatusInformation, o.* FROM [Order] o INNER JOIN cte ON o.Id = cte.OrderId AND cte.RN = 1 WHERE CustomerId = @CustomerId ORDER BY 1 DESC; which returns all orders for the customer with the statusinformation provided by the Common Table Expression. Does anyone know a good approach using Entity Framework?
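
    For what it's worth, the query behind that CTE is the classic greatest-n-per-group pattern ("the OrderStatusHistory row with the highest Id per OrderId"). The same idea sketched in Python over plain dictionaries, only to make the intent explicit; the field names follow the question:

        def latest_status_per_order(status_rows):
            """status_rows: iterable of dicts with OrderId, Id, Status, Information."""
            latest = {}
            for row in status_rows:
                current = latest.get(row["OrderId"])
                if current is None or row["Id"] > current["Id"]:
                    latest[row["OrderId"]] = row        # keep the row with the highest Id
            return latest                               # OrderId -> most recent status row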

    Read the article

  • Should I create a unique clustered index, or non-unique clustered index on this SQL 2005 table?

    - by Bremer
    I have a table storing millions of rows. It looks something like this: Table_Docs ID, Bigint (Identity col) OutputFileID, int Sequence, int …(many other fields) We find ourselves in a situation where the developer who designed it made the OutputFileID the clustered index. It is not unique. There can be thousands of records with this ID. It has no benefit to any processes using this table, so we plan to remove it. The question, is what to change it to… I have two candidates, the ID identity column is a natural choice. However, we have a process which does a lot of update commands on this table, and it uses the Sequence to do so. The Sequence is non-unique. Most records only contain one, but about 20% can have two or more records with the same Sequence. The INSERT app is a VB6 piece of crud throwing thousands insert commands at the table. The Inserted values are never in any particular order. So the Sequence of one insert may be 12345, and the next could be 12245. I know that this could cause SQL to move a lot of data to keep the clustered index in order. However, the Sequence of the inserts are generally close to being in order. All inserts would take place at the end of the clustered table. Eg: I have 5 million records with Sequence spanning 1 to 5 million. The INSERT app will be inserting sequence’s at the end of that range at any given time. Reordering of the data should be minimal (tens of thousands of records at most). Now, the UPDATE app is our .NET star. It does all UPDATES on the Sequence column. “Update Table_Docs Set Feild1=This, Field2=That…WHERE Sequence =12345” – hundreds of thousands of these a day. The UPDATES are completely and totally, random, touching all points of the table. All other processes are simply doing SELECT’s on this (Web pages). Regular indexes cover those. So my question is, what’s better….a unique clustered index on the ID column, benefiting the INSERT app, or a non-unique clustered index on the Sequence, benefiting the UPDATE app?

    Read the article

  • Id property not populated

    - by fingers
    I have an identity mapping like so: Id(x => x.GuidId).Column("GuidId") .GeneratedBy.GuidComb().UnsavedValue(Guid.Empty); When I retrieve an object from the database, the GuidId property of my object is Guid.Empty, not the actual Guid (the property in the class is of type System.Guid). However, all of the other properties in the object are populated just fine. The database field's data type (SQL Server 2005) is uniqueidentifier, and marked as RowGuid. The application that is connecting to the database is a VB.NET Web Site project (not a "Web Application" or "MVC Web Application" - just a regular "Web Site" project). I open the NHibernate session through a custom HttpModule. Here is the HttpModule: public class NHibernateModule : System.Web.IHttpModule { public static ISessionFactory SessionFactory; public static ISession Session; private static FluentConfiguration Configuration; static NHibernateModule() { if (Configuration == null) { string connectionString = cfg.ConfigurationManager.ConnectionStrings["myDatabase"].ConnectionString; Configuration = Fluently.Configure() .Database(MsSqlConfiguration.MsSql2005.ConnectionString(cs => cs.Is(connectionString))) .ExposeConfiguration(c => c.Properties.Add("current_session_context_class", "web")) .Mappings(x => x.FluentMappings.AddFromAssemblyOf<LeadMap>().ExportTo("C:\\Mappings")); } SessionFactory = Configuration.BuildSessionFactory(); } public void Init(HttpApplication context) { context.BeginRequest += delegate { Session = SessionFactory.OpenSession(); CurrentSessionContext.Bind(Session); }; context.EndRequest += delegate { CurrentSessionContext.Unbind(SessionFactory); }; } public void Dispose() { Session.Dispose(); } } The strangest part of all, is that from my unit test project, the GuidId property is returned as I would expect. I even rigged it to go for the exact row in the exact database as the web site was hitting. The only differences I can think of between the two projects are The unit test project is in C# Something with the way the session is managed between the HttpModule and my unit tests The configuration for the unit tests is as follows: Fluently.Configure() .Database(MsSqlConfiguration.MsSql2005.ConnectionString(cs => cs.Is(connectionString))) .Mappings(x => x.FluentMappings.AddFromAssemblyOf<LeadDetailMap>()); I am fresh out of ideas. Any help would be greatly appreciated. Thanks

    Read the article

  • MySQL Split Time Ranges into Smaller Chunks

    - by Neren
    Hello all, I've recently been tasked with finishing a PHP/MySQL web app when the developer quit last week. I'm no MySQL expert, so I apologize if this is an intensely simple question. I've searched SO for the better part of two days trying to find a relatively easy solution to my problem, which is as follows. Problem in a Nutshell: I have a MySQL table full of start and end datetime (GMT -5) & UNIX Timestamp values covering durations of irregular length and need to break/split/divide them into more-regular time chunks (5 minutes). I'm not after a count of row entries per time chunk/bucket/period, if that makes any sense. Data Example: started, ended, started_UNIX, ended_UNIX 2010-10-25 15:12:33, 2010-10-25 15:47:09, 1288033953, 1288036029 What I'm hoping to get: 2010-10-25 15:12:33, 2010-10-25 15:15:00, 1288033953, 1288037700 2010-10-25 15:15:00, 2010-10-25 15:20:00, 1288037700, 1288038000 2010-10-25 15:20:00, 2010-10-25 15:25:00, 1288038000, 1288038300 2010-10-25 15:25:00, 2010-10-25 15:30:00, 1288038300, 1288038600 2010-10-25 15:30:00, 2010-10-25 15:35:00, 1288038600, 1288038900 2010-10-25 15:35:00, 2010-10-25 15:40:00, 1288038900, 1288039200 2010-10-25 15:40:00, 2010-10-25 15:45:00, 1288039200, 1288039500 2010-10-25 15:45:00, 2010-10-25 15:47:09, 1288039500, 1288039629 If you're interested, here's the quick & dirty on the app and why I need the data: App overview: The application receives very simple POST requests generated by a basic sensor device when its input pins go to ground, which submits an INSERT query to the database where MySQL records a timestamp (as started). When the input pins return from a grounded state, the device submits a different POST request, which causes the PHP app to submit an UPDATE query, where a modification time timestamp is inserted (as ended). My employer recently changed the periodic reporting unit of measure from Seconds "On" Per Day to Seconds "On" Per 5 Minute Interval. I had formulated what I thought would be a workable solution, but when I looked at it on paper, it looked like Rube Goldberg's nightmare constructed in MySQL, so that was out. Any suggestions as to how to break these spans into 5 minute blocks? Keeping it all in MySQL would be my preference, though I'll take any suggestions. Thank you for any suggestions you may have. Again, I apologize if this is a no-brainer. If I ask any additional questions of the SO collective consciousness in the future, I'll try to word them a bit better. Any help will be happily welcomed. Thanks, Neren
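
    Whether the splitting ends up in SQL or in the PHP layer, the algorithm itself is small: walk from the start time to the end time in 5-minute steps, clamping the first and last chunk to the real boundaries. A Python sketch of just that part (the interval length is a parameter, and the function is illustrative rather than drop-in code):

        from datetime import datetime, timedelta

        def split_into_chunks(started, ended, minutes=5):
            step = timedelta(minutes=minutes)
            # round the first boundary up to the next 5-minute mark
            boundary = started - timedelta(minutes=started.minute % minutes,
                                           seconds=started.second,
                                           microseconds=started.microsecond) + step
            chunks, cursor = [], started
            while boundary < ended:
                chunks.append((cursor, boundary))
                cursor, boundary = boundary, boundary + step
            chunks.append((cursor, ended))
            return chunks

        # split_into_chunks(datetime(2010, 10, 25, 15, 12, 33),
        #                   datetime(2010, 10, 25, 15, 47, 9))
        # -> (15:12:33-15:15:00), (15:15:00-15:20:00), ..., (15:45:00-15:47:09)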

    Read the article

  • ANSI C as core of a C# project? Is this possible?

    - by Nektarios
    I'm writing a NON-GUI app which I want to be cross platform between OS X and Windows. I'm looking at the following architecture, but I don't know if it will work on the windows side: (Platform specific entry point) - ANSI C main loop = ANSI C model code doing data processing / logic = (Platform specific helpers) So the core stuff I'm planning to write in regular ANSI C, because A) it should be platform independent, B) I'm extremely comfortable with C, C) It can do the job and do it well (Platform specific entry point) can be written in whatever necessary to get the job done, this is a small amount of code, doesn't matter to me. (Platform specific helpers) is the sticky thing. This is stuff like parsing XML, accessing databases, graphics toolkit stuff, whatever. Things that aren't easy in C. Things that modern languages/frameworks will give for free. On OS X this code will be written in Objective-C interfacing with Cocoa. On Windows I'm thinking my best bet is to use C# So on Windows my architecture (simplified) looks like (C# or C?) - ANSI C - C# Is this possible? Some thoughts/suggestions so far.. 1) Compile my C core as a .dll -- this is fine, but seems there's no way to call my C# helpers unless I can somehow get function pointers and pass them to my core, but that seems unlikely 2) Compile a C .exe and a C# .exe and have them talk via shared memory or some kind of IPC. I'm not entirely opposed to this but it obviously introduces a lot of complexity so it doesn't seem ideal 3) Instead of C# use C++, it gets me some nice data management stuff and nice helper code. And I can mix it pretty easily. And the work I do could probably easily port to Linux. But I really don't like C++, and I don't want this to turn in to a 3rd-party-library-fest. Not that it's a huge deal, but it's 2010.. anything for basic data management should be built in. And targetting Linux is really not a priority. Note that no "total" alternatives are OK as suggested in other similar questions on SO I've seen; java, RealBasic, mono.. this is an extremely performance intensive application doing soft realtime for game/simulation purposes, I need C & friends here to do it right (maybe you don't, but I do)

    Read the article

  • How to implement a caching model without violating MVC pattern?

    - by RPM1984
    Hi Guys, I have an ASP.NET MVC 3 (Razor) Web Application, with a particular page which is highly database intensive, and user experience is of the upmost priority. Thus, i am introducing caching on this particular page. I'm trying to figure out a way to implement this caching pattern whilst keeping my controller thin, like it currently is without caching: public PartialViewResult GetLocationStuff(SearchPreferences searchPreferences) { var results = _locationService.FindStuffByCriteria(searchPreferences); return PartialView("SearchResults", results); } As you can see, the controller is very thin, as it should be. It doesn't care about how/where it is getting it's info from - that is the job of the service. A couple of notes on the flow of control: Controllers get DI'ed a particular Service, depending on it's area. In this example, this controller get's a LocationService Services call through to an IQueryable<T> Repository and materialize results into T or ICollection<T>. How i want to implement caching: I can't use Output Caching - for a few reasons. First of all, this action method is invoked from the client-side (jQuery/AJAX), via [HttpPost], which according to HTTP standards should not be cached as a request. Secondly, i don't want to cache purely based on the HTTP request arguments - the cache logic is a lot more complicated than that - there is actually two-level caching going on. As i hint to above, i need to use regular data-caching, e.g Cache["somekey"] = someObj;. I don't want to implement a generic caching mechanism where all calls via the service go through the cache first - i only want caching on this particular action method. First thought's would tell me to create another service (which inherits LocationService), and provide the caching workflow there (check cache first, if not there call db, add to cache, return result). That has two problems: The services are basic Class Libraries - no references to anything extra. I would need to add a reference to System.Web here. I would have to access the HTTP Context outside of the web application, which is considered bad practice, not only for testability, but in general - right? I also thought about using the Models folder in the Web Application (which i currently use only for ViewModels), but having a cache service in a models folder just doesn't sound right. So - any ideas? Is there a MVC-specific thing (like Action Filter's, for example) i can use here? General advice/tips would be greatly appreciated.
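
    One commonly used shape for this is a caching decorator around the existing service: a class that implements the same interface as LocationService, checks the cache first, and delegates to the real service on a miss, so the controller keeps taking the same dependency and never knows caching exists. In .NET the dictionary below could be replaced by something like System.Runtime.Caching.MemoryCache, which does not pull in System.Web. A rough Python sketch of the flow; the class and method names are only made up to mirror the question:

        import time

        class CachingLocationService:
            """Same interface as the real service, with a cache in front of it."""

            def __init__(self, inner_service, ttl_seconds=300):
                self._inner = inner_service
                self._ttl = ttl_seconds
                self._cache = {}                                 # key -> (expires_at, results)

            def find_stuff_by_criteria(self, search_preferences):
                key = repr(sorted(search_preferences.items()))   # assumes the preferences behave like a dict
                hit = self._cache.get(key)
                if hit and hit[0] > time.time():
                    return hit[1]                                # cache hit
                results = self._inner.find_stuff_by_criteria(search_preferences)
                self._cache[key] = (time.time() + self._ttl, results)
                return results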

    Read the article

  • How to handle very frequent updates to a Lucene index

    - by fsm
    I am trying to prototype an indexing/search application which uses very volatile indexing data sources (forums, social networks etc), here are some of the performance requirements, Very fast turn-around time (by this I mean that any new data (such as a new message on a forum) should be available in the search results very soon (less than a minute)) I need to discard old documents on a fairly regular basis to ensure that the search results are not dated. Last but not least, the search application needs to be responsive. (latency on the order of 100 milliseconds, and should support at least 10 qps) All of the requirements I have currently can be met w/o using Lucene (and that would let me satisfy all 1,2 and 3), but I am anticipating other requirements in the future (like search relevance etc) which Lucene makes easier to implement. However, since Lucene is designed for use cases far more complex than the one I'm currently working on, I'm having a hard time satisfying my performance requirements. Here are some questions, a. I read that the optimize() method in the IndexWriter class is expensive, and should not be used by applications that do frequent updates, what are the alternatives? b. In order to do incremental updates, I need to keep committing new data, and also keep refreshing the index reader to make sure it has the new data available. These are going to affect 1 and 3 above. Should I try duplicate indices? What are some common approaches to solving this problem? c. I know that Lucene provides a delete method, which lets you delete all documents that match a certain query, in my case, I need to delete all documents which are older than a certain age, now one option is to add a date field to every document and use that to delete documents later. Is it possible to do range queries on document ids (I can create my own id field since I think that the one created by lucene keeps changing) to delete documents? Is it any faster than comparing dates represented as strings? I know these are very open questions, so I am not looking for a detailed answer, I will try to treat all of your answers as suggestions and use them to inform my design. Thanks! Please let me know if you need any other information.

    Read the article

  • Can I use the [] operator in C++ to create virtual arrays

    - by Shane MacLaughlin
    I have a large code base, originally C ported to C++ many years ago, that is operating on a number of large arrays of spatial data. These arrays contain structs representing point and triangle entities that represent surface models. I need to refactor the code such that the specific way these entities are stored internally varies for specific scenarios. For example, if the points lie on a regular flat grid, I don't need to store the X and Y coordinates, as they can be calculated on the fly, as can the triangles. Similarly, I want to take advantage of out-of-core tools such as STXXL for storage. The simplest way of doing this is replacing array access with put- and get-type functions, e.g. point[i].x = XV; becomes Point p = GetPoint(i); p.x = XV; PutPoint(i,p); As you can imagine, this is a very tedious refactor on a large code base, prone to all sorts of errors en route. What I'd like to do is write a class that mimics the array by overloading the [] operator. As the arrays already live on the heap, and move around with reallocs, the code already assumes that references into the array such as point *p = point + i; may not be used. Is this class feasible to write? For example, writing the methods below in terms of the [] operator: void MyClass::PutPoint(int Index, Point p) { if (m_StorageStrategy == RegularGrid) { int xoffs,yoffs; ComputeGridFromIndex(Index,xoffs,yoffs); StoreGridPoint(xoffs,yoffs,p.z); } else { m_PointArray[Index] = p; } } Point MyClass::GetPoint(int Index) { if (m_StorageStrategy == RegularGrid) { int xoffs,yoffs; ComputeGridFromIndex(Index,xoffs,yoffs); return GetGridPoint(xoffs,yoffs); // GetGridPoint returns Point } else { return m_PointArray[Index]; } } My concern is that all the array classes I've seen tend to pass by reference, whereas I think I'll have to pass structs by value. I think it should work, but other than performance, can anyone see any major pitfalls with this approach? N.B. the reason I have to pass by value is to get point[a].z = point[b].z + point[c].z to work correctly where the underlying storage type varies.
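
    As a shape check, here is the same idea in Python, where __getitem__ and __setitem__ play the role of reading and writing through operator[]: indexing either computes the point from the grid parameters or falls back to real storage. (In C++ the usual way to make point[a].z = point[b].z + point[c].z work with varying storage is a small proxy object returned by operator[].) The names and spacing below are invented:

        class PointStore:
            def __init__(self, strategy, grid_width=0, spacing=25.0, points=None):
                self.strategy = strategy          # "grid" or "array"
                self.grid_width = grid_width
                self.spacing = spacing
                self.points = points or []        # used by the plain-array strategy
                self.heights = {}                 # grid strategy stores only z values

            def __getitem__(self, index):
                if self.strategy == "grid":
                    x = (index % self.grid_width) * self.spacing   # x/y computed on the fly
                    y = (index // self.grid_width) * self.spacing
                    return (x, y, self.heights.get(index, 0.0))
                return self.points[index]

            def __setitem__(self, index, point):
                if self.strategy == "grid":
                    self.heights[index] = point[2]                 # only z needs storing
                else:
                    self.points[index] = point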

    Read the article

  • Why does this search not generate the correct result?

    - by user482742
    Hi, All: Below is to find same customer and if he is in list, the number add one. If he is not in the list, just add him in the list. I use Search function to do this, but failed and generated incorrect records. It can not find the customer or the right number of customers. But if I use For..loop to iterate the list, it does well and can find the customer and add new customer in that for..loop search procedure. (I did not paste for ..loop search procedrue here). Another problem is that there is no difference between setting list.sorted true and false. It seems Search function is not correct. This search function is from an example of delphi textbook. The below is with Delphi 7. Thank you. Procedure Form1.create; begin list:=Tstringlist.create; list.sorted:=true; // Search function will generate exactly Same and Incorrect //records no matter list.sorted is set true or false. list.duplicates:=dupignore; .. end; Procedure addcustomer; var .. begin while p1.MatchAgain do begin //p1 is regular expression customer:=p1.MatchedExpression; if (search(customer)=false) then begin list.Add(customer+'=1'); end; allcustomer:=allcustomer+1; .. end; Function Tform1.search(customer: string): boolean; var fre:string; num:integer; L:integer; R:integer; M: Integer; CompareResult: Integer; found: boolean; begin result:=false; found:=false; L := 0; R := List.Count - 1; while (L <= R) and ( not found ) do begin M := (L + R) div 2; CompareResult := Comparetext(list.Names[m]), customer); if (compareresult=0) then begin fre:=list.ValueFromIndex [m]; num:=strtoint(fre); num:=num+1; list.ValueFromIndex[m]:=inttostr(num); Found := True; Result := true; exit; end else if compareresult > 0 then r := m - 1 else l := m + 1; end; end;
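
    If the end goal is just a count per customer, the search-and-increment can be sidestepped entirely by keeping the counts in a dictionary keyed on the customer string (in Delphi 7 a sorted TStringList with its Find method is the closest built-in equivalent). The counting logic, expressed as a small Python sketch purely to show the intent - it is not Delphi:

        from collections import defaultdict

        def count_customers(matched_customers):
            """matched_customers: iterable of customer strings from the regex matches."""
            counts = defaultdict(int)
            for customer in matched_customers:
                counts[customer] += 1      # unseen customers start at 0 and become 1
            return counts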

    Read the article

  • Distinguishing between failure and end of file in read loop

    - by celtschk
    The idiomatic loop to read from an istream is while (thestream >> value) { // do something with value } Now this loop has one problem: It will not distinguish if the loop terminated due to end of file, or due to an error. For example, take the following test program: #include <iostream> #include <sstream> void readbools(std::istream& is) { bool b; while (is >> b) { std::cout << (b ? "T" : "F"); } std::cout << " - " << is.good() << is.eof() << is.fail() << is.bad() << "\n"; } void testread(std::string s) { std::istringstream is(s); is >> std::boolalpha; readbools(is); } int main() { testread("true false"); testread("true false tr"); } The first call to testread contains two valid bools, and therefore is not an error. The second call ends with a third, incomplete bool, and therefore is an error. Nevertheless, the behaviour of both is the same. In the first case, reading the boolean value fails because there is none, while in the second case it fails because it is incomplete, and in both cases EOF is hit. Indeed, the program above outputs twice the same line: TF - 0110 TF - 0110 To solve this problem, I thought of the following solution: while (thestream >> std::ws && !thestream.eof() && thestream >> value) { // do something with value } The idea is to detect regular EOF before actually trying to extract the value. Because there might be whitespace at the end of the file (which would not be an error, but cause read of the last item to not hit EOF), I first discard any whitespace (which cannot fail) and then test for EOF. Only if I'm not at the end of file, I try to read the value. For my example program, it indeed seems to work, and I get TF - 0100 TF - 0110 So in the first case (correct input), fail() returns false. Now my question: Is this solution guaranteed to work, or was I just (un-)lucky that it happened to give the desired result? Also: Is there a simpler (or, if my solution is wrong, a correct) way to get the desired result?

    Read the article

  • Preg_replace regex, newlines, connection resets

    - by bob_the_destroyer
    I have mixed html, custom code, and regular text I need to examine and change frequently on several, long wiki pages. I'm working with a proprietary wiki-like application and have no control over how the application functions or validates user input. The layout of pages that users add must follow a very specific standard layout and always include very specific text in only certain places - a standard which frequently changes. If users add pages that are so far out of the standard, they will be deleted. The fact that all this is obviously a complete waste of time when alternative platforms to do exactly what's needed here exist is already understood. I've built a PHP based API to automate this post-validation and frequent restandardization process for me. I've been able set up regex patterns to handle all this mixed text, and they all work fine for handling single lines. The problem I have is this: Poorly formed regex against long text with line breaks can lead to unexpected results, such as connection resets. I have no access to server-side logs to troubleshoot. How do I overcome this? This is just one example of what I currently have: {column} and {section} tags I'm searching for below can have any number of attributes, and wrap any text. {section} may or may not exist and may or may not be one or more lines under {column}, but it has to be wrapped inside {column}. {column} itself may or may not exist, and if it doesn't, I don't care. I want to grab the inner section contents and wrap it in an html div tag. I can't recall the exact pattern I'm using offhand at the moment, but it's close enough... $pattern = "/\{column:id=summary([|]?([a-zA-Z0-9-_ ]+[:][a-zA-Z0-9-_ ]+[ ]?))\}(.*)({section([|]([a-zA-Z0-9-_ ]+[:][a-zA-Z0-9-_ ]+[ ]?))\}(.*)\{section\}(.*))?{column\}/s"; $replacement = "{html}<div id='summary'$7</div{html}"; $text = preg_replace($pattern, $replacement, $subject); Handling the {column} and {section} attributes and passing only valid HTML parameters to the new html div or a subtext of it is itself a challenge, but my main focus above right now is getting that (.*) value within {section} above without causing a connection reset. Any pointers?
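
    A common cause of blow-ups on long multi-line input is a greedy (.*) that spans the whole page and backtracks heavily; making the wildcards lazy and keeping the character classes tight usually tames it. The pattern below is a Python re sketch of the same extraction (lazy .*? plus DOTALL so the dot crosses newlines); the tag syntax is taken from the question, everything else is illustrative:

        import re

        SECTION_RE = re.compile(
            r"\{column:id=summary[^}]*\}"            # opening {column ...} tag
            r".*?"                                   # anything before the section, lazily
            r"\{section[^}]*\}(.*?)\{section\}"      # capture the section body
            r".*?\{column\}",                        # up to the closing {column}
            re.DOTALL,
        )

        def extract_summary_section(page_text):
            match = SECTION_RE.search(page_text)
            return match.group(1) if match else None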

    Read the article

  • How should I store Dynamically Changing Data into Server Cache?

    - by Scott
    Hey all, EDIT: Purpose of this Website: Its called Utopiapimp.com. It is a third party utility for a game called utopia-game.com. The site currently has over 12k users to it an I run the site. The game is fully text based and will always remain that. Users copy and paste full pages of text from the game and paste the copied information into my site. I run a series of regular expressions against the pasted data and break it down. I then insert anywhere from 5 values to over 30 values into the DB based on that one paste. I then take those values and run queries against them to display the information back in a VERY simple and easy to understand way. The game is team based and each team has 25 users to it. So each team is a group and each row is ONE users information. The users can update all 25 rows or just one row at a time. I require storing things into cache because the site is very slow doing over 1,000 queries almost every minute. So here is the deal. Imagine I have an excel spreadsheet with 100 columns and 5000 rows. Each row has two unique identifiers. One for the row it self and one to group together 25 rows a piece. There are about 10 columns in the row that will almost never change and the other 90 columns will always be changing. We can say some will even change in a matter of seconds depending on how fast the row is updated. Rows can also be added and deleted from the group, but not from the database. The rows are taken from about 4 queries from the database to show the most recent and updated data from the database. So every time something in the database is updated, I would also like the row to be updated. If a row or a group has not been updated in 12 or so hours, it will be taken out of Cache. Once the user calls the group again via the DB queries. They will be placed into Cache. The above is what I would like. That is the wish. In Reality, I still have all the rows, but the way I store them in Cache is currently broken. I store each row in a class and the class is stored in the Server Cache via a HUGE list. When I go to update/Delete/Insert items in the list or rows, most the time it works, but sometimes it throws errors because the cache has changed. I want to be able to lock down the cache like the database throws a lock on a row more or less. I have DateTime stamps to remove things after 12 hours, but this almost always breaks because other users are updating the same 25 rows in the group or just the cache has changed. This is an example of how I add items to Cache, this one shows I only pull the 10 or so columns that very rarely change. This example all removes rows not updated after 12 hours: DateTime dt = DateTime.UtcNow; if (HttpContext.Current.Cache["GetRows"] != null) { List<RowIdentifiers> pis = (List<RowIdentifiers>)HttpContext.Current.Cache["GetRows"]; var ch = (from xx in pis where xx.groupID == groupID where xx.rowID== rowID select xx).ToList(); if (ch.Count() == 0) { var ck = GetInGroupNotCached(rowID, groupID, dt); //Pulling the group from the DB for (int i = 0; i < ck.Count(); i++) pis.Add(ck[i]); pis.RemoveAll((x) => x.updateDateTime < dt.AddHours(-12)); HttpContext.Current.Cache["GetRows"] = pis; return ck; } else return ch; } else { var pis = GetInGroupNotCached(rowID, groupID, dt);//Pulling the group from the DB HttpContext.Current.Cache["GetRows"] = pis; return pis; } On the last point, I remove items from the cache, so the cache doesn't actually get huge. To re-post the question, Whats a better way of doing this? 
Maybe by putting locks on the cache? Can I do better than this? I just want it to stop breaking when rows are added or removed.
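
    Whatever store ends up holding the data, the missing piece above is that every read-modify-write of a group's rows has to happen under a lock (per group, or one coarse lock to start with), so concurrent updates cannot interleave and invalidate each other. A sketch of that flow in Python, with threading.Lock standing in for whatever locking primitive the web stack provides; all names are invented:

        import threading
        import time

        class GroupCache:
            def __init__(self, ttl_hours=12):
                self._ttl = ttl_hours * 3600
                self._lock = threading.Lock()
                self._groups = {}                  # group_id -> (expires_at, {row_id: row})

            def get_or_load(self, group_id, load_from_db):
                with self._lock:
                    entry = self._groups.get(group_id)
                    if entry and entry[0] > time.time():
                        return dict(entry[1])      # hand back a copy, not the shared dict
                rows = {r["row_id"]: r for r in load_from_db(group_id)}   # DB hit outside the lock
                with self._lock:
                    self._groups[group_id] = (time.time() + self._ttl, rows)
                return dict(rows)

            def update_row(self, group_id, row_id, row):
                with self._lock:
                    entry = self._groups.get(group_id)
                    if entry:
                        entry[1][row_id] = row     # mutate only while holding the lock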

    Read the article

  • How to pipe two CORE::system commands in a cross-platform way

    - by Pedro Silva
    I'm writing a System::Wrapper module to abstract away from CORE::system and the qx operator. I have a serial method that attempts to connect command1's output to command2's input. I've made some progress using named pipes, but POSIX::mkfifo is not cross-platform. Here's part of what I have so far (the run method at the bottom basically calls system): package main; my $obj1 = System::Wrapper->new( interpreter => 'perl', arguments => [-pe => q{''}], input => ['input.txt'], description => 'Concatenate input.txt to STDOUT', ); my $obj2 = System::Wrapper->new( interpreter => 'perl', arguments => [-pe => q{'$_ = reverse $_}'}], description => 'Reverse lines of input input', output => { '>' => 'output' }, ); $obj1->serial( $obj2 ); package System::Wrapper; #... sub serial { my ($self, @commands) = @_; eval { require POSIX; POSIX->import(); require threads; }; my $tmp_dir = File::Spec->tmpdir(); my $last = $self; my @threads; push @commands, $self; for my $command (@commands) { croak sprintf "%s::serial: type of args to serial must be '%s', not '%s'", ref $self, ref $self, ref $command || $command unless ref $command eq ref $self; my $named_pipe = File::Spec->catfile( $tmp_dir, int \$command ); POSIX::mkfifo( $named_pipe, 0777 ) or croak sprintf "%s::serial: couldn't create named pipe %s: %s", ref $self, $named_pipe, $!; $last->output( { '>' => $named_pipe } ); $command->input( $named_pipe ); push @threads, threads->new( sub{ $last->run } ); $last = $command; } $_->join for @threads; } #... My specific questions: Is there an alternative to POSIX::mkfifo that is cross-platform? Win32 named pipes don't work, as you can't open those as regular files, neither do sockets, for the same reasons. The above doesn't quite work; the two threads get spawned correctly, but nothing flows across the pipe. I suppose that might have something to do with pipe deadlocking or output buffering. What throws me off is that when I run those two commands in the actual shell, everything works as expected.
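
    If named pipes are the sticking point, anonymous pipes avoid the portability problem entirely: start command1 with its stdout connected to a pipe and hand that pipe to command2 as stdin, which is exactly what the shell's | does and works on both Windows and POSIX. A Python subprocess sketch of the wiring, with placeholder commands:

        import subprocess

        def serial(cmd1, cmd2, output_path):
            """Run cmd1 | cmd2 > output_path using anonymous pipes."""
            with open(output_path, "w") as out:
                p1 = subprocess.Popen(cmd1, stdout=subprocess.PIPE)
                p2 = subprocess.Popen(cmd2, stdin=p1.stdout, stdout=out)
                p1.stdout.close()        # so p2 sees EOF once p1 exits
                p2.wait()
                p1.wait()

        # e.g. serial(["perl", "-pe", "", "input.txt"],
        #             ["perl", "-pe", "$_ = reverse $_"],
        #             "output")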

    Read the article

  • Adding a clustered index to a SQL table: what dangers exist for a live production system?

    - by MoSlo
    Right, keep in mind i need to describe this by abstracting all possible confidential info: I've been put in charge of a 10-year old transactional system of which the majority business logic is implemented at database level (triggers, stored procedures etc). Win2000 server, MSSQL 2000 Enterprise. No immediate plans for replacing/updating the system are being considered :( The core process is a program that executes transactions - specifically, it executes a stored procedure with various parameters, lets call it sp_ProcessTrans. The program executes the stored procedure at asynchronous intervals. By itself, things work fine. But there are 30 instances of this program on remotely located workstations, all of them asynchronously executing sp_ProcessTrans and then retrieving data from the SQL server (execution is pretty regular - ranging 0 to 60 times a minute, depending on what items the program instance is responsible for) . Performance of the system has dropped considerably with 10 yrs of data growth: the reason is the deadlocks and specifically deadlock wait times. The deadlock is on the Employee table. I have discovered: In sp_ProcessTrans' execution, it selects from an Employee table 7 times (dont ask) The select is done on a field that is NOT the primary key No index exists on this field. Thus a table scan is performed. 7 times. per transaction So the reason for deadlocks is clear. I created a non-unique ordered clustered index on the field (field looks good, almost unique, NUM(7), very rarely changes). Immediate improvement in the test environment. The problem is that i cannot simulate the deadlocks in a test environment (I'd need 30 workstations; i'd need to simulate 'realistic' activity on those stations, so visualization is out). I need to know if i must schedule downtime. Creating an index shouldn't be a risky operation for MSSQL, but is there any danger (data corruption in transactions/select statements/extra wait time etc) to create this field index on the production database while the transactions are still taking place? (although i can select a time when transactions are fairly quiet through the 30 stations) Are there any hidden dangers i'm not seeing (not looking forward to needing to restore the DB if something goes wrong, restoring would take a lot of time with 10yrs of data).

    Read the article

  • C++ Filling a 1D array to represent an n-dimensional object based on a straight line segment

    - by Ben
    I'm struggling to find a good way to put this question but here goes. I'm making a system that uses a 1D array implemented as double * parts_ = new double[some_variable];. I want to use this to hold co-ordinates for a particle system that can run in various dimensions. What I want to be able to do is write a generic fill algorithm for filling this in n-dimensions with a common increment in all direction to a variable size. Examples will serve best I think. Consider the case where the number of particles stored by the array is 4 In 1D this produces 4 elements in the array because each particle only has one co-ordinate. 1D: {0, 25, 50, 75}; In 2D this produces 8 elements in the array because each particle has two co-ordinates.. 2D: {0, 0, 0, 25, 25, 0, 25, 25} In 3D this produces 12 elements in the array because each particle now has three co-ordinates {0, 0, 0, 0, 0, 25, 0, 0, 50, ... } These examples are still not quite accurate, but they hopefully will suffice. The way I would do this normally for two dimensions: int i = 0; for(int x = 0; x < parts_size_ / dims_ / dims_ * 25; x += 25) { for(int y = 0; y < parts_size_ / dims_ / dims_ * 25; y += 25) { parts_[i] = x; parts_[i+1] = y; i+=2; // Indentation hates me today .< How can I implement this for n-dimensions where 25 can be any number? The straight line part is because it seems to me logical that a line is a somewhat regular shape in 1D, as is a square in 2D, and a cube in 3D. It seems to me that it would follow that there would be similar shapes in this family that could be implemented for 4D and higher dimensions via a similar fill pattern. This is the shape I wish to set my array to represent.
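
    The generalisation to n dimensions boils down to "enumerate every coordinate tuple of a d-dimensional grid and flatten". In C++ that is usually one flat index loop with div/mod per axis (or recursion instead of hard-coded nested loops); the Python sketch below uses itertools.product only to show the enumeration order and the resulting flat layout:

        from itertools import product

        def fill_grid(points_per_axis, dims, spacing=25.0):
            flat = []
            for coord in product(range(points_per_axis), repeat=dims):
                flat.extend(axis * spacing for axis in coord)   # one value per dimension
            return flat

        # fill_grid(2, 2) -> [0.0, 0.0, 0.0, 25.0, 25.0, 0.0, 25.0, 25.0],
        # matching the 2D example above; fill_grid(2, 3) enumerates a 2x2x2 cube the same way.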

    Read the article

  • Windows Machine Debugging

    - by PrettyFlower
    I've been learning how to program for Windows for some time now and am getting pretty comfy with COM. I had thought to go over to Linux and do some C++ programming there and I wished to run Rosetta Commons so I installed Fedora. I had tried installing Ubuntu a few months ago and things got messy. I had a glitch, maybe caused by one of the live cd creators, my video card or something I don't know. Who Crashed suggested it was my video card and I had regular messages about ntfs.sys and page file issues. At any rate I just installed Fedora and the same thing is happening again. I would like to think with the twenty five years of doing this that I might finally make some headway into debugging my system. I think I may have overlooked a lot of what could be done in favor of simply uninstalling, reinstalling and formatting and starting from scratch. I have opened up the folder windows debugging tools, quite accidentally and just before I was going to clean sweep again, and I found KD and WinDbg. I had never seen these before and I felt that maybe I should look into this. I am quite familiar with the modern machine that is known as the computer, I know what a Kernel is and am now pretty familiar with at the very least Windows Operating System Services. I wish to begin tracking my own machines errors. I understand that most kernel debugging is done on a second machine but I don't have one. And also I understand the goal of the debugger seems to be less about run of the mill errors and more about development time strategies but I'm sure there is more to this. This is my first go at this and I thought maybe I could get some suggestions on where to go from here. I would really like to learn ways to fix my machine and also maybe pick up some tricks on the dev side as well. I hope this isn't too broad a question or too generalized. I'm really just looking for the keywords and an overview of the more routine strategies used. thx

    Read the article

  • Rails controller not rendering correct view when form is force-submitted by Javascript

    - by whazzmaster
    I'm using Rails with jQuery, and I'm working on a page for a simple site that prints each record to a table. The only editable field for each record is a checkbox. My goal is that every time a checkbox is changed, an ajax request updates that boolean attribute for the record (i.e., no submit button). My view code: <td> <% form_remote_tag :url => admin_update_path, :html => { :id => "form#{lead.id}" } do %> <%= hidden_field :lead, :id, :value => lead.id %> <%= check_box :lead, :contacted, :id => "checkbox"+lead.id.to_s, :checked => lead.contacted, :onchange => "$('#form#{lead.id}').submit();" %> <% end %> </td> In my routes.rb, admin_update_path is defined by map.admin_update 'update', :controller => "admin", :action => "update", :method => :post I also have an RJS template to render back an update. The contents of this file is currently just for testing (I just wanted to see if it worked, this will not be the ultimate functionality on a successful save)... page << "$('#checkbox#{@lead.id}').hide();" When clicked, the ajax request is successfully sent, with the correct params, and the action on the controller can retrieve the record and update it just fine. The problem is that it doesn't send back the JS; it changes the page in the browser and renders the generated Javascript as plain text rather than executing it in-place. Rails does some behind-the-scenes stuff to figure out if the incoming request is an ajax call, and I can't figure out why it's interpreting the incoming request as a regular web request as opposed to an ajax request. I may be missing something extremely simple here, but I've kind-of burned myself out looking so I thought I'd ask for another pair of eyes. Thanks in advance for any info!

    Read the article

  • Solution Output Directory

    - by L.E.O
    The project that I'm currently working on is being developed by multiple teams where each team is responsible for different part of the project. They all have set up their own C# projects and solutions with configuration settings specific to their own needs. However, now we need to create another, global solution, which will combine and build all projects into the same output directory. The problem that I have encountered though, is that I have found only one way to make all projects build into the same output directory - I need to modify configurations for all of them. That is what we would like to avoid. We would prefer that all these projects had no knowledge about this "global" solution. Each team must retain possibility to work just with their own sub-solution. One possible workaround is to create a special configuration for all projects just for this "global" solution, but that could create extra problems since now you have to constantly sync this configuration settings with the regular one, used by that specific team. Last thing we want to do is to spend hours trying to figure out why something doesn't work when building under global solution just because of some check box that developers have checked in their configuration, but forgot to do so in the global configuration. So, to simplify, we need some sort of output directory setting or post build event that would only be present when building from that global, all-inclusive solution. Is there any way to achieve this without changing something in projects configurations? Update 1: Some extra details I guess I need to mention: We need this global solution to be as close as possible to what the end user gets when he installs our application, since we intend to use it for debugging of the entire application when we need to figure out which part of the application isn't working before sending this bug to the team working on that part. This means that when building under global solution the output directory hierarchy should be the same as it would be in Program Files after installation, so that if, for example, we have Program Files/MyApplication/Addins folder which contains all the addins developed by different teams, we need the global solution to copy the binaries from addins projects and place them in the output directory accordingly. The thing is, the team developing an addin doest necessary know that it is an addin and that it should be placed in that folder, so they cannot change their relative output directory to be build/bin/Debug/Addins.

    Read the article

  • jQuery .ajax doesn't load Google Adsense

    - by Sahas Katta
    Hey Everyone, Just ran into an odd issue. I have a simple WP loop and instead of regular NEXT/BACK pages, I use a jQuery powered $.ajax get to append the following page to the current page. It works perfectly. However, I choose to insert a Google Adsense unit every 5th story. Unfortunately, the Adsense unit that is brought in with a second, third, or etc page load don't render. Here's my loop: 10 stories per page, Adsense after the 4th one. <?php $count = 0; ?> <?php if ( have_posts() ) : ?> <?php while ( have_posts() ) : the_post(); ?> <?php $count++; ?> <div class="card"> <div class="title"> <a href="<?php the_permalink(); ?>" title="<?php the_title(); ?>"><span><?php the_title(); ?></span></a> </div> </div> <?php if ($count == 4) : ?> <div class="card"> <!-- ADSENSE CODE HERE (Straight from Google Adsense Panel, no tweaks.) --> </div> <?php endif; ?> As for my jQuery script, here's how that looks: $.ajax({ url: nextPageLink, type: 'GET', success: function(data) { $(data).find('#reviews .card').appendTo('#reviews'); }, error: function(xhr, status, error) { $('.loadination').addClass('hidden'); } }); Keep in mind, I just simplified my code to give you guys an example. The code above was just the essentials. All the loading stuff works perfectly. Images, text, links, etc all load just fine. However, the Google Adsense unit doesn't. Any help would be appreciated. Thanks and Happy Holidays!

    Read the article

  • The dynamic Type in C# Simplifies COM Member Access from Visual FoxPro

    - by Rick Strahl
    I’ve written quite a bit about Visual FoxPro interoperating with .NET in the past both for ASP.NET interacting with Visual FoxPro COM objects as well as Visual FoxPro calling into .NET code via COM Interop. COM Interop with Visual FoxPro has a number of problems but one of them at least got a lot easier with the introduction of dynamic type support in .NET. One of the biggest problems with COM interop has been that it’s been really difficult to pass dynamic objects from FoxPro to .NET and get them properly typed. The only way that any strong typing can occur in .NET for FoxPro components is via COM type library exports of Visual FoxPro components. Due to limitations in Visual FoxPro’s type library support as well as the dynamic nature of the Visual FoxPro language where few things are or can be described in the form of a COM type library, a lot of useful interaction between FoxPro and .NET required the use of messy Reflection code in .NET. Reflection is .NET’s base interface to runtime type discovery and dynamic execution of code without requiring strong typing. In FoxPro terms it’s similar to EVALUATE() functionality albeit with a much more complex API and corresponiding syntax. The Reflection APIs are fairly powerful, but they are rather awkward to use and require a lot of code. Even with the creation of wrapper utility classes for common EVAL() style Reflection functionality dynamically access COM objects passed to .NET often is pretty tedious and ugly. Let’s look at a simple example. In the following code I use some FoxPro code to dynamically create an object in code and then pass this object to .NET. An alternative to this might also be to create a new object on the fly by using SCATTER NAME on a database record. How the object is created is inconsequential, other than the fact that it’s not defined as a COM object – it’s a pure FoxPro object that is passed to .NET. Here’s the code: *** Create .NET COM InstanceloNet = CREATEOBJECT('DotNetCom.DotNetComPublisher') *** Create a Customer Object Instance (factory method) loCustomer = GetCustomer() loCustomer.Name = "Rick Strahl" loCustomer.Company = "West Wind Technologies" loCustomer.creditLimit = 9999999999.99 loCustomer.Address.StreetAddress = "32 Kaiea Place" loCustomer.Address.Phone = "808 579-8342" loCustomer.Address.Email = "[email protected]" *** Pass Fox Object and echo back values ? 
loNet.PassRecordObject(loObject) RETURN FUNCTION GetCustomer LOCAL loCustomer, loAddress loCustomer = CREATEOBJECT("EMPTY") ADDPROPERTY(loCustomer,"Name","") ADDPROPERTY(loCustomer,"Company","") ADDPROPERTY(loCUstomer,"CreditLimit",0.00) ADDPROPERTY(loCustomer,"Entered",DATETIME()) loAddress = CREATEOBJECT("Empty") ADDPROPERTY(loAddress,"StreetAddress","") ADDPROPERTY(loAddress,"Phone","") ADDPROPERTY(loAddress,"Email","") ADDPROPERTY(loCustomer,"Address",loAddress) RETURN loCustomer ENDFUNC Now prior to .NET 4.0 you’d have to access this object passed to .NET via Reflection and the method code to do this would looks something like this in the .NET component: public string PassRecordObject(object FoxObject) { // *** using raw Reflection string Company = (string) FoxObject.GetType().InvokeMember( "Company", BindingFlags.GetProperty,null, FoxObject,null); // using the easier ComUtils wrappers string Name = (string) ComUtils.GetProperty(FoxObject,"Name"); // Getting Address object – then getting child properties object Address = ComUtils.GetProperty(FoxObject,"Address");    string Street = (string) ComUtils.GetProperty(FoxObject,"StreetAddress"); // using ComUtils 'Ex' functions you can use . Syntax     string StreetAddress = (string) ComUtils.GetPropertyEx(FoxObject,"AddressStreetAddress"); return Name + Environment.NewLine + Company + Environment.NewLine + StreetAddress + Environment.NewLine + " FOX"; } Note that the FoxObject is passed in as type object which has no specific type. Since the object doesn’t exist in .NET as a type signature the object is passed without any specific type information as plain non-descript object. To retrieve a property the Reflection APIs like Type.InvokeMember or Type.GetProperty().GetValue() etc. need to be used. I made this code a little simpler by using the Reflection Wrappers I mentioned earlier but even with those ComUtils calls the code is pretty ugly requiring passing the objects for each call and casting each element. Using .NET 4.0 Dynamic Typing makes this Code a lot cleaner Enter .NET 4.0 and the dynamic type. Replacing the input parameter to the .NET method from type object to dynamic makes the code to access the FoxPro component inside of .NET much more natural: public string PassRecordObjectDynamic(dynamic FoxObject) { // *** using raw Reflection string Company = FoxObject.Company; // *** using the easier ComUtils class string Name = FoxObject.Name; // *** using ComUtils 'ex' functions to use . Syntax string Address = FoxObject.Address.StreetAddress; return Name + Environment.NewLine + Company + Environment.NewLine + Address + Environment.NewLine + " FOX"; } As you can see the parameter is of type dynamic which as the name implies performs Reflection lookups and evaluation on the fly so all the Reflection code in the last example goes away. The code can use regular object ‘.’ syntax to reference each of the members of the object. You can access properties and call methods this way using natural object language. Also note that all the type casts that were required in the Reflection code go away – dynamic types like var can infer the type to cast to based on the target assignment. As long as the type can be inferred by the compiler at compile time (ie. the left side of the expression is strongly typed) no explicit casts are required. Note that although you get to use plain object syntax in the code above you don’t get Intellisense in Visual Studio because the type is dynamic and thus has no hard type definition in .NET . 
    The above example calls a .NET component from VFP, but it also works the other way around: another frequent scenario is .NET code calling into a FoxPro COM object that returns a dynamic result. Assume you have a FoxPro COM object that returns a FoxPro cursor record as an object:

DEFINE CLASS FoxData AS SESSION OlePublic

cAppStartPath = ""

FUNCTION INIT
    THIS.cAppStartPath = ADDBS( JustPath(Application.ServerName) )
    SET PATH TO ( THIS.cAppStartPath )
ENDFUNC

FUNCTION GetRecord(lnPk)
LOCAL loCustomer

SELECT * FROM tt_Cust WHERE pk = lnPk ;
    INTO CURSOR TCustomer

IF _TALLY < 1
   RETURN NULL
ENDIF

SCATTER NAME loCustomer MEMO

RETURN loCustomer
ENDFUNC

ENDDEFINE

If you call this from a .NET application you can retrieve the data via COM Interop and cast the result to dynamic, which simplifies access to the FoxPro type that was created on the fly:

int pk = 0;
int.TryParse(Request.QueryString["id"], out pk);

// Create the Fox COM object through its COM Callable Wrapper
FoxData foxData = new FoxData();

dynamic foxRecord = foxData.GetRecord(pk);
string company = foxRecord.Company;
DateTime entered = foxRecord.Entered;

This code looks as simple and natural as it should be – heck, you could write code like this in days long gone by in scripting languages like classic ASP. Compared to the Reflection code that was previously necessary, this is much easier to write, understand and maintain.

For COM Interop and Visual FoxPro work, dynamic type support in .NET 4.0 is a huge improvement and certainly makes it much easier to deal with FoxPro code that calls into .NET. Whether you’re using COM to call Visual FoxPro objects from .NET (ASP.NET calling a COM component and getting a dynamic result back) or FoxPro code is calling into a .NET COM component from a FoxPro desktop application, at one point or another FoxPro likely ends up passing complex dynamic data to .NET, and for this dynamic typing makes the code much cleaner and more readable without having to create custom Reflection wrappers. As a bonus, the dynamic runtime that underlies the dynamic type is fairly efficient at making Reflection calls, especially if members are repeatedly accessed.

© Rick Strahl, West Wind Technologies, 2005-2010. Posted in: COM, FoxPro, .NET, CSharp
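The dynamic snippet above assumes a record always comes back and that the member names line up. Because the FoxPro GetRecord() returns NULL when no row matches, and because dynamic member access is only resolved at runtime, a little defensive code is worthwhile in practice. Here is a minimal sketch of that guard logic; the helper class, its method name, and the exact way VFP’s .NULL. surfaces in .NET are my assumptions rather than part of the original example:

using System;
using System.Runtime.InteropServices;
using Microsoft.CSharp.RuntimeBinder;

public static class FoxRecordHelper
{
    // foxData is expected to be the FoxData COM instance from the example above,
    // passed as dynamic so this sketch compiles without the interop assembly.
    public static string GetCompanyName(dynamic foxData, int pk)
    {
        // GetRecord() returns NULL when no row matches. Depending on how the
        // interop layer marshals VFP's .NULL., that may arrive as null or as
        // DBNull.Value, so guard against both.
        object result = foxData.GetRecord(pk);
        if (result == null || result is DBNull)
            return "No customer found for pk " + pk;

        dynamic foxRecord = result;
        try
        {
            // Member names are resolved only at runtime; a typo or a property
            // the FoxPro object doesn't have fails here, not at compile time.
            return (string)foxRecord.Company;
        }
        catch (RuntimeBinderException ex)
        {
            return "Dynamic binding failed: " + ex.Message;
        }
        catch (COMException ex)
        {
            return "COM call failed: " + ex.Message;
        }
    }
}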

    Read the article

  • Our Look at Opera 10.50 Web Browser

    - by Asian Angel
    Everyone has been talking about the newest version of Opera recently, but perhaps you have not looked at it too closely yet. Today we take a look at 10.50 and let you see what this “new browser” is all about.

The New Engines

- Carakan JavaScript Engine: runs web applications up to 7 times faster than its predecessor, Futhark
- Vega Graphics Library: enables super fast and smooth graphics on everything from tab switching to webpage animation
- Presto 2.5: provides support for HTML5, CSS 2.1 and the latest CSS3 standards

A Look at the Features Available

If you have installed or used older versions of Opera before, the default look after a clean install will probably seem rather different. The main differences in appearance are located within the “glass border” areas of the browser.

The “Speed Dial” setup looks and works just as well as in previous versions. You can set a favorite wallpaper or image as your background and choose the number of “dials” using the “Configure Speed Dial” command.

One of the standout differences is the “O Button”. All of the menus have been condensed into this single access point, but it only takes a few moments to find what you are looking for. If you have used this style in earlier versions of Opera, note that some of the items have been moved around. For those who prefer the “Menu Bar”, it can be easily restored using the “Show Menu Bar” command.

If desired, you can “extend” the “Tab Bar” downwards to display thumbnails of your open tabs. Just use your mouse to grab the bottom of the “Tab Bar” and adjust it to suit your personal needs. The only problem with this feature is that it quickly uses up a good-sized portion of your available UI and browser window space.

The “Password Manager” is ready to access when needed; the button’s background turns a shiny metallic blue when you open a webpage that you have login information saved for.

One of the new features is a small “Recycle Bin” button in the upper right corner. Clicking on it displays a list of recently closed tabs, giving you easy access to any tabs that you may have accidentally closed. This is definitely a great feature to have within easy reach.

The “Zoom” feature has a new look as well. Instead of the pop-up menu of view sizes present before, you now have a slider that you can use to adjust the zoom level.

For our default setup the “Sidebar Panels” available were: Bookmarks, Widgets, Unite, Notes, Downloads, History & Panels. Additional panels such as Links, Windows, Search, Info, etc. are available if you want and/or need them (accessible using the “Panels” plus-sign button).

The “Opera Link” button makes it easy to synchronize your Speed Dial, Bookmarks, Personal Bar, Custom Searches, History & Notes. Note: “Opera Link” requires an account and can be signed up for using the link provided below.

Want to share files with your family and friends? “Unite” allows you to do that and more. With “Unite” you can stream music, show photo galleries, share files and/or folders, and host webpages directly from your browser. We have a more in-depth look at “Unite” in our article here. Note: Use of “Unite” requires an Opera account.

Got a slow internet connection? “Opera Turbo” can help by running your web traffic through Opera’s compression servers to speed up browsing.
Keep in mind that “Opera Turbo” will not engage if you are accessing a secure website (e.g. your bank’s website), thus preserving your security. Note: “Opera Turbo” can be set up to automatically detect slow internet connections (e.g. crowded Wi-Fi in a cafe).

Opera now has a built-in “Private Browsing Mode” for those who prefer anonymous browsing and want to keep the history records clean on their computer. To access it, go to “Tabs and windows” and select “New private tab” or “New private window” as desired. When you open your new private tab or window you will see a message with details on how Opera handles browsing information, along with a large “door hanger” symbol. Notice that the one tab is locked into “Private Browsing Mode” while the others are still working in regular browsing mode. Very nice! A miniature version of the “door hanger” symbol is present on any tab that is locked into “Private Browsing Mode”.

If you are using Windows 7 you will love how things look from your Taskbar. Here you can see four very nice looking thumbnails for the tabs that we had open; all you have to do is click on the desired thumbnail. The “Context Menu” looks just as lovely as the thumbnails and has some terrific functionality built into it.

Add Enhanced Aero Capability

If you love Aero and want more of it for your new Opera install, then we have the perfect theme for you. The theme’s name is Z1-AV69, and once you have downloaded it you will need to place it in the “Skins” subfolder in Opera’s “Program Files” folder. Note: For our example we used version 1.10, but version 2.00 is now available (link provided below).

Once you have restarted Opera, go to the “O Menu” and select “Appearance”. When the “Appearance” window opens, click on “Z1-Glass Skin” and then click “OK”. All of a sudden you will have more “Aero goodness” to enjoy. Compare this screenshot with the one at the top of this article… the only part that is not transparent now is the browser window area itself.

Want even more “Aero goodness”? Right-click on the “Tab Bar” and set “Tab Bar Placement” to “Left”. Note: You can achieve the same effect by setting the “Tab Bar Placement” to “Right”. With the “Speed Dial” visible you will be able to see your wallpaper with ease. While this is obviously not for everyone, it does make for a great visual trick.

Portable Versions

Perhaps you need this wonderful new version of Opera to go with you wherever you go during the day. Not a problem… just visit the Opera USB website to choose a version that works best for you. You can select from Zip or Exe setup files and, if needed, update an older portable version using a zipped update files package. If you are updating an older version, keep in mind that you will need to delete the old OperaUSB.exe file due to changes in the new setup files. During our tests, updating older portable versions went well for the most part, but we did experience a few odd UI quirks here and there… so we recommend setting up a clean install if possible.

Conclusion

The new 10.50 release is a pleasure to use and is a recommended install for your system. Whether you are considering trying Opera for the first time or have been using it for a bit, we think you will be pleased with everything that the 10.50 release has to offer. For those who would like to add User Scripts to Opera, be certain to look at our how-to article here.
Links

- Download Opera 10.50 for your location (Windows)
- Get the latest Snapshot versions for Linux & Mac
- Sign up for an Opera Link account
- View in-depth detail on Opera 10.50’s features
- Download the Z1-AV69 Aero Theme
- Download Portable Opera 10.50

    Read the article

< Previous Page | 241 242 243 244 245 246 247 248 249 250 251 252  | Next Page >