Search Results

Search found 11306 results on 453 pages for 'methods'.


  • How do I create/use a Fluent NHibernate convention to map UInt32 properties to an SQL Server 2008 database?

    - by dommer
    I'm trying to use a convention to map UInt32 properties to a SQL Server 2008 database. I don't seem to be able to create a solution based on existing web sources, due to updates in the way Fluent NHibernate works - i.e. examples are out of date. Here's my code as it currently stands (which, when I try to expose the schema, fails due to SQL Server not supporting UInt32). Apologies for the code being a little long, but I'm not 100% sure what is relevant to the problem, so I'm erring on the side of caution. I think I'll need a relatively comprehensive example, as I don't seem to be able to pull the pieces together into a working solution, at present. FluentConfiguration configuration = Fluently.Configure() .Database(MsSqlConfiguration.MsSql2008 .ConnectionString(connectionString)) .Mappings(mapping => mapping.AutoMappings.Add( AutoMap.AssemblyOf<Product>() .Conventions.Add<UInt32UserTypeConvention>())); configuration.ExposeConfiguration(x => new SchemaExport(x).Create(false, true)); namespace NHibernateTest { public class UInt32UserTypeConvention : UserTypeConvention<UInt32UserType> { // Empty. } } namespace NHibernateTest { public class UInt32UserType : IUserType { // Public properties. public bool IsMutable { get { return false; } } public Type ReturnedType { get { return typeof(UInt32); } } public SqlType[] SqlTypes { get { return new SqlType[] { SqlTypeFactory.Int32 }; } } // Public methods. public object Assemble(object cached, object owner) { return cached; } public object DeepCopy(object value) { return value; } public object Disassemble(object value) { return value; } public new bool Equals(object x, object y) { return (x != null && x.Equals(y)); } public int GetHashCode(object x) { return x.GetHashCode(); } public object NullSafeGet(IDataReader rs, string[] names, object owner) { int? i = (int?)NHibernateUtil.Int32.NullSafeGet(rs, names[0]); return (UInt32?)i; } public void NullSafeSet(IDbCommand cmd, object value, int index) { UInt32? u = (UInt32?)value; int? i = (Int32?)u; NHibernateUtil.Int32.NullSafeSet(cmd, i, index); } public object Replace(object original, object target, object owner) { return original; } } }
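
    A possible workaround, not taken from the question: skip the convention and override the mapping of the affected property directly, so SchemaExport sees an int column. The C# sketch below targets Fluent NHibernate's auto-mapping API; SomeUInt32Property is a hypothetical property name on Product, and UInt32UserType is the class defined in the question.

      FluentConfiguration configuration = Fluently.Configure()
          .Database(MsSqlConfiguration.MsSql2008.ConnectionString(connectionString))
          .Mappings(mapping => mapping.AutoMappings.Add(
              AutoMap.AssemblyOf<Product>()
                  .Override<Product>(map =>
                      // SomeUInt32Property is hypothetical; route it through the
                      // custom IUserType so the generated schema uses INT.
                      map.Map(x => x.SomeUInt32Property).CustomType<UInt32UserType>())));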


  • Having trouble wrapping functions in the linux kernel

    - by Corey Henderson
    I've written a LKM that implements Trusted Path Execution (TPE) into your kernel: https://github.com/cormander/tpe-lkm I run into an occasional kernel OOPS (describe at the end of this question) when I define WRAP_SYSCALLS to 1, and am at my wit's end trying to track it down. A little background: Since the LSM framework doesn't export its symbols, I had to get creative with how I insert the TPE checking into the running kernel. I wrote a find_symbol_address() function that gives me the address of any function I need, and it works very well. I can call functions like this: int (*my_printk)(const char *fmt, ...); my_printk = find_symbol_address("printk"); (*my_printk)("Hello, world!\n"); And it works fine. I use this method to locate the security_file_mmap, security_file_mprotect, and security_bprm_check functions. I then overwrite those functions with an asm jump to my function to do the TPE check. The problem is, the currently loaded LSM will no longer execute the code for it's hook to that function, because it's been totally hijacked. Here is an example of what I do: int tpe_security_bprm_check(struct linux_binprm *bprm) { int ret = 0; if (bprm->file) { ret = tpe_allow_file(bprm->file); if (IS_ERR(ret)) goto out; } #if WRAP_SYSCALLS stop_my_code(&cs_security_bprm_check); ret = cs_security_bprm_check.ptr(bprm); start_my_code(&cs_security_bprm_check); #endif out: return ret; } Notice the section between the #if WRAP_SYSCALLS section (it's defined as 0 by default). If set to 1, the LSM's hook is called because I write the original code back over the asm jump and call that function, but I run into an occasional kernel OOPS with an "invalid opcode": invalid opcode: 0000 [#1] SMP RIP: 0010:[<ffffffff8117b006>] [<ffffffff8117b006>] security_bprm_check+0x6/0x310 I don't know what the issue is. I've tried several different types of locking methods (see the inside of start/stop_my_code for details) to no avail. To trigger the kernel OOPS, write a simple bash while loop that endlessly starts a backgrounded "ls" command. After a minute or so, it'll happen. I'm testing this on a RHEL6 kernel, also works on Ubuntu 10.04 LTS (2.6.32 x86_64). While this method has been the most successful so far, I have tried another method of simply copying the kernel function to a pointer I created with kmalloc but when I try to execute it, I get: kernel tried to execute NX-protected page - exploit attempt? (uid: 0). If anyone can tell me how to kmalloc space and have it marked as executable, that would also help me solve the above problem. Any help is appreciated!


  • Speed up a web service for auto complete and avoid too many method calls.

    - by jphenow
    So I've got my jquery autocomplete 'working,' but its a little fidgety since I call the webservice method each time a keydown() fires so I get lots of methods hanging and sometimes to get the "auto" to work I have to type it out and backspace a bit because i'm assuming it got its return value a little slow. I've limited the query results to 8 to mininmize time. Is there anything i can do to make this a little snappier? This thing seems near useless if I don't get it a little more responsive. javascript $("#clientAutoNames").keydown(function () { $.ajax({ type: "POST", url: "WebService.asmx/LoadData", data: "{'input':" + JSON.stringify($("#clientAutoNames").val()) + "}", contentType: "application/json; charset=utf-8", dataType: "json", success: function (data) { if (data.d != null) { var serviceScript = data.d; } $("#autoNames").html(serviceScript); $('#clientAutoNames').autocomplete({ minLength: 2, source: autoNames, delay: 100, focus: function (event, ui) { $('#project').val(ui.item.label); return false; }, select: function (event, ui) { $('#clientAutoNames').val(ui.item.label); $('#projectid').val(ui.item.value); $('#project-description').html(ui.item.desc); pkey = $('#project-id').val; return false; } }) .data("autocomplete")._renderItem = function (ul, item) { return $("<li></li>") .data("item.autocomplete", item) .append("<a>" + item.label + "<br>" + item.desc + "</a>") .appendTo(ul); } } }); }); WebService.asmx <WebMethod()> _ Public Function LoadData(ByVal input As String) As String Dim result As String = "<script>var autoNames = [" Dim sqlOut As Data.SqlClient.SqlDataReader Dim connstring As String = *Datasource* Dim strSql As String = "SELECT TOP 2 * FROM v_Clients WHERE (SearchName Like '" + input + "%') ORDER BY SearchName" Dim cnn As Data.SqlClient.SqlConnection = New Data.SqlClient.SqlConnection(connstring) Dim cmd As Data.SqlClient.SqlCommand = New Data.SqlClient.SqlCommand(strSql, cnn) cnn.Open() sqlOut = cmd.ExecuteReader() Dim c As Integer = 0 While sqlOut.Read() result = result + "{" result = result + "value: '" + sqlOut("ContactID").ToString() + "'," result = result + "label: '" + sqlOut("SearchName").ToString() + "'," 'result = result + "desc: '" + title + " from " + company + "'," result = result + "}," End While result = result + "];</script>" sqlOut.Close() cnn.Close() Return result End Function I'm sure I'm just going about this slightly wrong or not doing a better balance of calls or something. Greatly appreciated!
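
    Two things stand out before tuning anything else: the client fires a request on every keydown (a short debounce, waiting a few hundred milliseconds after the last keystroke, cuts most of the calls), and the web method concatenates user input straight into the SQL string, which is an injection risk. Below is a hedged C# sketch of a parameterized server side; it keeps the LoadData name and the v_Clients/SearchName/ContactID names from the question, but the service class name, connection string, and return shape are assumptions (the original builds a <script> string instead).

      using System.Collections.Generic;
      using System.Data.SqlClient;
      using System.Web.Services;

      public class SearchService : WebService
      {
          private const string connectionString = "...";  // placeholder, not from the question

          [WebMethod]
          public List<string[]> LoadData(string input)
          {
              var results = new List<string[]>();
              using (var cnn = new SqlConnection(connectionString))
              using (var cmd = new SqlCommand(
                  "SELECT TOP 8 ContactID, SearchName FROM v_Clients " +
                  "WHERE SearchName LIKE @term ORDER BY SearchName", cnn))
              {
                  // Parameterize the search term instead of concatenating it into the SQL.
                  cmd.Parameters.AddWithValue("@term", input + "%");
                  cnn.Open();
                  using (var reader = cmd.ExecuteReader())
                  {
                      while (reader.Read())
                      {
                          results.Add(new[] { reader["ContactID"].ToString(),
                                              reader["SearchName"].ToString() });
                      }
                  }
              }
              return results;
          }
      }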


  • Minutia on Objective-C Categories and Extensions.

    - by Matt Wilding
    I learned something new while trying to figure out why my readwrite property declared in a private Category wasn't generating a setter. It was because my Category was named: // .m @interface MyClass (private) @property (readwrite, copy) NSArray* myProperty; @end Changing it to: // .m @interface MyClass () @property (readwrite, copy) NSArray* myProperty; @end and my setter is synthesized. I now know that Class Extension is not just another name for an anonymous Category. Leaving a Category unnamed causes it to morph into a different beast: one that now gives compile-time method implementation enforcement and allows you to add ivars. I now understand the general philosophies underlying each of these: Categories are generally used to add methods to any class at runtime, and Class Extensions are generally used to enforce private API implementation and add ivars. I accept this. But there are trifles that confuse me. First, at a hight level: Why differentiate like this? These concepts seem like similar ideas that can't decide if they are the same, or different concepts. If they are the same, I would expect the exact same things to be possible using a Category with no name as is with a named Category (which they are not). If they are different, (which they are) I would expect a greater syntactical disparity between the two. It seems odd to say, "Oh, by the way, to implement a Class Extension, just write a Category, but leave out the name. It magically changes." Second, on the topic of compile time enforcement: If you can't add properties in a named Category, why does doing so convince the compiler that you did just that? To clarify, I'll illustrate with my example. I can declare a readonly property in the header file: // .h @interface MyClass : NSObject @property (readonly, copy) NSString* myString; @end Now, I want to head over to the implementation file and give myself private readwrite access to the property. If I do it correctly: // .m @interface MyClass () @property (readonly, copy) NSString* myString; @end I get a warning when I don't synthesize, and when I do, I can set the property and everything is peachy. But, frustratingly, if I happen to be slightly misguided about the difference between Category and Class Extension and I try: // .m @interface MyClass (private) @property (readonly, copy) NSString* myString; @end The compiler is completely pacified into thinking that the property is readwrite. I get no warning, and not even the nice compile error "Object cannot be set - either readonly property or no setter found" upon setting myString that I would had I not declared the readwrite property in the Category. I just get the "Does not respond to selector" exception at runtime. If adding ivars and properties is not supported by (named) Categories, is it too much to ask that the compiler play by the same rules? Am I missing some grand design philosophy?


  • Difference between Cloud and Virtualization

    - by Akash Kava
    Ops: This does not belong on ServerFault because it focuses on programming architecture. I have the following questions regarding the differences between cloud and virtualization. How is the cloud different from virtualization? I recently compared the pricing of Rackspace, Amazon and similar cloud providers, and found that our current 6 dedicated servers come out cheaper than their pricing. So how can one claim the cloud is cheaper? Is it cheaper only in comparison to normal hosting? We reorganized our infrastructure in a virtual environment to reduce our configuration overhead at time of failure, and we did not have to rewrite any piece of code already written for the earlier setup. So moving to virtualization does not require any reprogramming. But the cloud is absolutely different and will require entire reprogramming, right? Is it really worth recoding when our current IT costs are 3-4 times lower than cloud hosting, including RAID backups and all sorts of clustering for high availability? A new programming architecture means new overheads for training staff, new methods of testing and new deployment schemes; does that justify the "on demand resource usage" promise of the cloud? Our current development architecture is simple server-side ASP.NET web services with no local context, and Flex/Silverlight on the client side, which offers a pretty good REST architecture and is highly scalable. How does the cloud differ from the REST model of deployment? On storage, SQL Server or MySQL offers pretty good replication and high availability, so what is the advantage of the cloud? Data guarantee: one of our vendors hosting some other customer's app on the cloud (one of the most used) lost an entire (virtual) hard disk and an entire module in the first 6 months. The second provider said it is your duty to take backups; fine, I agree, but no provider gives an SLA for data guarantee, they give 99% uptime. However, in most business apps uptime is less important than data integrity. In our 10 years of dedicated hosting experience we had only one hard disk crash. This makes me a little skeptical about going to the cloud and losing control over data. And I feel it's just a big marketing buzz to sell virtualization in a different form. Size of data: currently all providers charge very heavily for large data; if you are hosting only below 100GB the cloud can be a good alternative, but I think virtual servers and dedicated servers from 100GB up to a few TBs are still cheaper. Why would anyone want to pay so much for the cloud when there is no data guarantee and nothing is said about redundancy? (I wish SO had spell check for Internet Explorer, sorry for the wrong spellings in my post)


  • TestNG - Factories and Dataproviders

    - by Tim K
    Background Story I'm working at a software firm developing a test automation framework to replace our old spaghetti tangled system. Since our system requires a login for almost everything we do, I decided it would be best to use @BeforeMethod, @DataProvider, and @Factory to setup my tests. However, I've run into some issues. Sample Test Case Lets say the software system is a baseball team roster. We want to test to make sure a user can search for a team member by name. (Note: I'm aware that BeforeMethods don't run in any given order -- assume that's been taken care of for now.) @BeforeMethod public void setupSelenium() { // login with username & password // acknowledge announcements // navigate to search page } @Test(dataProvider="players") public void testSearch(String playerName, String searchTerm) { // search for "searchTerm" // browse through results // pass if we find playerName // fail (Didn't find the player) } This test case assumes the following: The user has already logged on (in a BeforeMethod, most likely) The user has already navigated to the search page (trivial, before method) The parameters to the test are associated with the aforementioned login The Problems So lets try and figure out how to handle the parameters for the test case. Idea #1 This method allows us to associate dataproviders with usernames, and lets us use multiple users for any specific test case! @Test(dataProvider="players") public void testSearch(String user, String pass, String name, String search) { // login with user/pass // acknowledge announcements // navigate to search page // ... } ...but there's lots of repetition, as we have to make EVERY function accept two extra parameters. Not to mention, we're also testing the acknowledge announcements feature, which we don't actually want to test. Idea #2 So lets use the factory to initialize things properly! class BaseTestCase { public BaseTestCase(String user, String password, Object[][] data); } class SomeTest { @Factory public void ... } With this, we end up having to write one factory per test case... Although, it does let us have multiple users per test-case. Conclusion I'm about fresh out of ideas. There was another idea I had where I was loading data from an XML file, and then calling the methods from a program... but its getting silly. Any ideas?


  • calling startActivity() inside of an instance method - causing a NullPointerException

    - by Cole
    Heya - I'm trying to call startActivity() from a class that extends AsyncTask in the onPostExecute(). Here's the flow: Class that extends AsyncTask: protected void onPostExecute() { Login login = new Login(); login.pushCreateNewOrChooseExistingFormActivity(); } Class that extends Activity: public void pushCreateNewOrChooseExistingFormActivity() { // start the CreateNewOrChooseExistingForm Activity Intent intent = new Intent(Intent.ACTION_VIEW); **ERROR_HERE*** intent.setClassName(this, CreateNewOrChooseExistingForm.class.getName()); startActivity(intent); } And I get this error… every time: 03-17 16:04:29.579: ERROR/AndroidRuntime(1503): FATAL EXCEPTION: main 03-17 16:04:29.579: ERROR/AndroidRuntime(1503): java.lang.NullPointerException 03-17 16:04:29.579: ERROR/AndroidRuntime(1503): at android.content.ContextWrapper.getPackageName(ContextWrapper.java:120) 03-17 16:04:29.579: ERROR/AndroidRuntime(1503): at android.content.ComponentName.(ComponentName.java:62) 03-17 16:04:29.579: ERROR/AndroidRuntime(1503): at android.content.Intent.setClassName(Intent.java:4850) 03-17 16:04:29.579: ERROR/AndroidRuntime(1503): at com.att.AppName.Login.pushCreateNewOrChooseExistingFormActivity(Login.java:47) For iOS developers - I'm just trying to push a new view controller on to a navigational controller's stack a la pushViewController:animated:. Which apparently - is hard to do on this platform. Any ideas? Thanks in advance! UPDATE - FIXED: per @Falmarri advice, i managed to resolve this issue. first of all, i'm no longer calling Login login = new Login(); to create a new login object. bad. bad. bad. no cookie. instead, when preparing to call .execute(), this tutorial suggests passing the applicationContext to the class the executes the AsyncTask, for my purposes, as shown below: CallWebServiceTask task = new CallWebServiceTask(); // pass the login object to the task task.applicationContext = login; // execute the task in the background, passing the required params task.execute(login); now, in onPostExecute(), i can get to my Login objects methods like so: ((Login) applicationContext).pushCreateNewOrChooseExistingFormActivity(); ((Login) applicationContext).showLoginFailedAlert(result.get("httpResponseCode").toString()); ... hope this helps someone else out there! especially iOS developers transistioning over to Android...


  • C++ Exam Question

    - by Carlucho
    I just took an exam where i was asked the following: Write the function body of each of the methods GenStrLen, InsertChar and StrReverse for the given code bellow. You must take into consideration the following; How strings are constructed in C++ The string must not overflow Insertion of character increases its length by 1 An empty string is indicated by StrLen = 0 class Strings { private: char str[80]; int StrLen; public: // Constructor Strings() { StrLen=0; }; // A function for returning the length of the string 'str' int GetStrLen(void) { }; // A function to inser a character 'ch' at the end of the string 'str' void InsertChar(char ch) { }; // A function to reverse the content of the string 'str' void StrReverse(void) { }; }; The answer I gave was something like this (see bellow). My one of problem is that used many extra variables and that makes me believe am not doing it the best possible way, and the other thing is that is not working.... class Strings { private: char str[80]; int StrLen; int index; // *** Had to add this *** public: Strings(){ StrLen=0; } int GetStrLen(void){ for (int i=0 ; str[i]!='\0' ; i++) index++; return index; // *** Here am getting a weird value, something like 1829584505306 *** } void InsertChar(char ch){ str[index] = ch; // *** Not sure if this is correct cuz I was not given int index *** } void StrRevrse(void){ GetStrLen(); char revStr[index+1]; for (int i=0 ; str[i]!='\0' ; i++){ for (int r=index ; r>0 ; r--) revStr[r] = str[i]; } } }; I would appreciate if anyone could explain me toughly what is the best way to have answered the question and why. Also how come my professor closes each class function like " }; " i thought that was only used for ending classes and constructors only. Thanks a lot for your help.


  • What are the pros and cons of using manual list iteration vs recursion through fail

    - by magus
    I come up against this all the time, and I'm never sure which way to attack it. Below are two methods for processing some season facts. What I'm trying to work out is whether to use method 1 or 2, and what are the pros and cons of each, especially large amounts of facts. methodone seems wasteful since the facts are available, why bother building a list of them (especially a large list). This must have memory implications too if the list is large enough ? And it doesn't take advantage of Prolog's natural backtracking feature. methodtwo takes advantage of backtracking to do the recursion for me, and I would guess would be much more memory efficient, but is it good programming practice generally to do this? It's arguably uglier to follow, and might there be any other side effects? One problem I can see is that each time fail is called, we lose the ability to pass anything back to the calling predicate, eg. if it was methodtwo(SeasonResults), since we continually fail the predicate on purpose. So methodtwo would need to assert facts to store state. Presumably(?) method 2 would be faster as it has no (large) list processing to do? I could imagine that if I had a list, then methodone would be the way to go.. or is that always true? Might it make sense in any conditions to assert the list to facts using methodone then process them using method two? Complete madness? But then again, I read that asserting facts is a very 'expensive' business, so list handling might be the way to go, even for large lists? Any thoughts? Or is it sometimes better to use one and not the other, depending on (what) situation? eg. for memory optimisation, use method 2, including asserting facts and, for speed use method 1? season(spring). season(summer). season(autumn). season(winter). % Season handling showseason(Season) :- atom_length(Season, LenSeason), write('Season Length is '), write(LenSeason), nl. % ------------------------------------------------------------- % Method 1 - Findall facts/iterate through the list and process each %-------------------------------------------------------------- % Iterate manually through a season list lenseason([]). lenseason([Season|MoreSeasons]) :- showseason(Season), lenseason(MoreSeasons). % Findall to build a list then iterate until all done methodone :- findall(Season, season(Season), AllSeasons), lenseason(AllSeasons), write('Done'). % ------------------------------------------------------------- % Method 2 - Use fail to force recursion %-------------------------------------------------------------- methodtwo :- % Get one season and show it season(Season), showseason(Season), % Force prolog to backtrack to find another season fail. % No more seasons, we have finished methodtwo :- write('Done').


  • DRY-ing very similar specs for ASP.NET MVC controller action with MSpec (BDD guidelines)

    - by spapaseit
    Hi all, I have two very similar specs for two very similar controller actions: VoteUp(int id) and VoteDown(int id). These methods allow a user to vote a post up or down; kinda like the vote up/down functionality for StackOverflow questions. The specs are: VoteDown: [Subject(typeof(SomeController))] public class When_user_clicks_the_vote_down_button_on_a_post : SomeControllerContext { Establish context = () => { post = PostFakes.VanillaPost(); post.Votes = 10; session.Setup(s => s.Single(Moq.It.IsAny<Expression<Func<Post, bool>>>())).Returns(post); session.Setup(s => s.CommitChanges()); }; Because of = () => result = controller.VoteDown(1); It should_decrement_the_votes_of_the_post_by_1 = () => suggestion.Votes.ShouldEqual(9); It should_not_let_the_user_vote_more_than_once; } VoteUp: [Subject(typeof(SomeController))] public class When_user_clicks_the_vote_down_button_on_a_post : SomeControllerContext { Establish context = () => { post = PostFakes.VanillaPost(); post.Votes = 0; session.Setup(s => s.Single(Moq.It.IsAny<Expression<Func<Post, bool>>>())).Returns(post); session.Setup(s => s.CommitChanges()); }; Because of = () => result = controller.VoteUp(1); It should_increment_the_votes_of_the_post_by_1 = () => suggestion.Votes.ShouldEqual(1); It should_not_let_the_user_vote_more_than_once; } So I have two questions: How should I go about DRY-ing these two specs? Is it even advisable or should I actually have one spec per controller action? I know I Normally should, but this feels like repeating myself a lot. Is there any way to implement the second It within the same spec? Note that the It should_not_let_the_user_vote_more_than_once; requires me the spec to call controller.VoteDown(1) twice. I know the easiest would be to create a separate spec for it too, but it'd be copying and pasting the same code yet again... I'm still getting the hang of BDD (and MSpec) and many times it is not clear which way I should go, or what the best practices or guidelines for BDD are. Any help would be appreciated.
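
    If MSpec's behaviors feature fits here, the duplicated assertions can live in one [Behaviors] class and be pulled into each context with a Behaves_like field. A hedged C# sketch follows (field names are loosely based on the question; MSpec matches the behavior's protected static fields to same-named fields visible in the spec context):

      using Machine.Specifications;

      [Behaviors]
      public class A_counted_vote
      {
          protected static Post post;
          protected static int expected_votes;

          It should_change_the_vote_count_of_the_post =
              () => post.Votes.ShouldEqual(expected_votes);
      }

      // Inside the existing vote-down context, only the expectation differs:
      //     protected static int expected_votes;          // set to 9 in Establish
      //     Behaves_like<A_counted_vote> a_counted_vote;  // replaces the duplicated It
      // The vote-up context sets expected_votes = 1 instead.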


  • C++ vector and segmentation faults

    - by Headspin
    I am working on a simple mathematical parser. Something that just reads number = 1 + 2; I have a vector containing these tokens. They store a type and string value of the character. I am trying to step through the vector to build an AST of these tokens, and I keep getting segmentation faults, even when I am under the impression my code should prevent this from happening. Here is the bit of code that builds the AST: struct ASTGen { const vector<Token> &Tokens; unsigned int size, pointer; ASTGen(const vector<Token> &t) : Tokens(t), pointer(0) { size = Tokens.size() - 1; } unsigned int next() { return pointer + 1; } Node* Statement() { if(next() <= size) { switch(Tokens[next()].type) { case EQUALS : Node* n = Assignment_Expr(); return n; } } advance(); } void advance() { if(next() <= size) ++pointer; } Node* Assignment_Expr() { Node* lnode = new Node(Tokens[pointer], NULL, NULL); advance(); Node* n = new Node(Tokens[pointer], lnode, Expression()); return n; } Node* Expression() { if(next() <= size) { advance(); if(Tokens[next()].type == SEMICOLON) { Node* n = new Node(Tokens[pointer], NULL, NULL); return n; } if(Tokens[next()].type == PLUS) { Node* lnode = new Node(Tokens[pointer], NULL, NULL); advance(); Node* n = new Node(Tokens[pointer], lnode, Expression()); return n; } } } }; ... ASTGen AST(Tokens); Node* Tree = AST.Statement(); cout << Tree->Right->Data.svalue << endl; I can access Tree->Data.svalue and get the = Node's token info, so I know that node is getting spawned, and I can also get Tree->Left->Data.svalue and get the variable to the left of the = I have re-written it many times trying out different methods for stepping through the vector, but I always get a segmentation fault when I try to access the = right node (which should be the + node) Any help would be greatly appreciated.


  • Comet with multiple channels

    - by mark_dj
    Hello, I am writing an web app which needs to subscribe to multiple channels via javascript. I am using Atmosphere and Jersey as backend. However the jQuery plugin they work with only supports 1 channel. I've start buidling my own implementation. Now it works oke, but when i try to subscribe to 2 channels only 1 channel gets notified. Is the XMLHttpRequest blocking the rest of the XMLHttpRequests? Here's my code: function AtmosphereComet(url) { this.Connected = new signals.Signal(); this.Disconnected = new signals.Signal(); this.NewMessage = new signals.Signal(); var xhr = null; var self = this; var gotWelcomeMessage = false; var readPosition; var url = url; var onIncomingXhr = function() { if (xhr.readyState == 3) { if (xhr.status==200) // Received a message { var message = xhr.responseText; console.log(message); if(!gotWelcomeMessage && message.indexOf("") -1) { gotWelcomeMessage = true; self.Connected.dispatch(sprintf("Connected to %s", url)); } else { self.NewMessage.dispatch(message.substr(readPosition)); } readPosition = this.responseText.length; } } else if (this.readyState == 4) { self.disconnect(); } } var getXhr = function() { if ( window.location.protocol !== "file:" ) { try { return new window.XMLHttpRequest(); } catch(xhrError) {} } try { return new window.ActiveXObject("Microsoft.XMLHTTP"); } catch(activeError) {} } this.connect = function() { xhr = getXhr(); xhr.onreadystatechange = onIncomingXhr; xhr.open("GET", url, true); xhr.send(null); } this.disconnect = function() { xhr.onreadystatechange = null; xhr.abort(); } this.send = function(message) { } } And the test code: var connection1 = AtmosphereConnection("http://192.168.1.145:9999/botenveiling/b/product/status/2", AtmosphereComet); var connection2 = AtmosphereConnection("http://192.168.1.145:9999/botenveiling/b/product/status/1", AtmosphereComet); var output = function(msg) { alert(output); }; connection1.NewMessage.add(output); connection2.NewMessage.add(output); connection1.connect(); In AtmosphereConnection I instantiate the given AtmosphereComet with "new". I iterate over the object to check if it has to methods: "send", "connect", "disconnect". The reason for this is that i can switch the implementation later on when i complete the websocket implementation :) However I think the problem rests with the XmlHttpRequest object, or am i mistaken? P.S.: signals.Signal is a js observer/notifier library: http://millermedeiros.github.com/js-signals/ Testing: Firefox 3.6.1.3


  • meteor mongodb _id changing after insert (and UUID property as well)

    - by lommaj
    I have meteor method that does an insert. Im using Regulate.js for form validation. I set the game_id field to Meteor.uuid() to create a unique value that I also route to /game_show/:game_id using iron router. As you can see I'm logging the details of the game, this works fine. (image link to log below) Meteor.methods({ create_game_form : function(data){ Regulate.create_game_form.validate(data, function (error, data) { if (error) { console.log('Server side validation failed.'); } else { console.log('Server side validation passed!'); // Save data to database or whatever... //console.log(data[0].value); var new_game = { game_id: Meteor.uuid(), name : data[0].value, game_type: data[1].value, creator_user_id: Meteor.userId(), user_name: Meteor.user().profile.name, created: new Date() }; console.log("NEW GAME BEFORE INSERT: ", new_game); GamesData.insert(new_game, function(error, new_id){ console.log("GAMES NEW MONGO ID: ", new_id) var game_data = GamesData.findOne({_id: new_id}); console.log('NEW GAME AFTER INSERT: ', game_data); Session.set('CURRENT_GAME', game_data); }); } }); } }); All of the data coming out of the console.log at this point works fine After this method call the client routes to /game_show/:game_id Meteor.call('create_game_form', data, function(error){ if(error){ return alert(error.reason); } //console.log("post insert data for routing variable " ,data); var created_game = Session.get('CURRENT_GAME'); console.log("Session Game ", created_game); Router.go('game_show', {game_id: created_game.game_id}); }); On this view, I try to load the document with the game_id I just inserted Template.game_start.helpers({ game_info: function(){ console.log(this.game_id); var game_data = GamesData.find({game_id: this.game_id}); console.log("trying to load via UUID ", game_data); return game_data; } }); sorry cant upload images... :-( https://www.evernote.com/shard/s21/sh/c07e8047-de93-4d08-9dc7-dae51668bdec/a8baf89a09e55f8902549e79f136fd45 As you can see from the image of the console log below, everything matches the id logged before insert the id logged in the insert callback using findOne() the id passed in the url However the mongo ID and the UUID I inserted ARE NOT THERE, the only document in there has all the other fields matching except those two! Not sure what im doing wrong. Thanks!


  • Why isn't my data persisting with NSKeyedArchiver?

    - by aking63
    Im just working on what should be the "finishing touches" of my first iPhone game. For some reason, when I save with NSKeyedArchiver/Unarchiver, the data seems to load once and then gets lost or something. Here's what I've been able to deduce: When I save in this viewController, pop to the previous one, and then push back into this one, the data is saved and prints as I want it to. But when I save in this viewController, then push a new one and pop back into this one, the data is lost. Any idea why this might be happening? Do I have this set up all wrong? I copied it from a book months ago. Here's the methods I use to save and load. - (void) saveGameData { NSLog(@"LS:saveGameData"); // SAVE DATA IMMEDIATELY NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES); NSString *documentsDirectory = [paths objectAtIndex:0]; NSString *gameStatePath = [documentsDirectory stringByAppendingPathComponent:@"gameState.dat"]; NSMutableData *gameSave= [NSMutableData data]; NSKeyedArchiver *encoder = [[NSKeyedArchiver alloc] initForWritingWithMutableData:gameSave]; [encoder encodeObject:categoryLockStateArray forKey:kCategoryLockStateArray]; [encoder encodeObject:self.levelsPlist forKey:@"levelsPlist"]; [encoder finishEncoding]; [gameSave writeToFile:gameStatePath atomically:YES]; NSLog(@"encoded catLockState:%@",categoryLockStateArray); } - (void) loadGameData { NSLog(@"loadGameData"); // If there is a saved file, perform the load NSMutableData *gameData = [NSData dataWithContentsOfFile:[[NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0] stringByAppendingPathComponent:@"gameState.dat"]]; // LOAD GAME DATA if (gameData) { NSLog(@"-Loaded Game Data-"); NSKeyedUnarchiver *unarchiver = [[NSKeyedUnarchiver alloc] initForReadingWithData:gameData]; self.levelsPlist = [unarchiver decodeObjectForKey:@"levelsPlist"]; categoryLockStateArray = [unarchiver decodeObjectForKey:kCategoryLockStateArray]; NSLog(@"decoded catLockState:%@",categoryLockStateArray); } // CREATE GAME DATA else { NSLog(@"-Created Game Data-"); self.levelsPlist = [[NSMutableDictionary alloc] initWithContentsOfFile:[[NSBundle mainBundle] pathForResource:kLevelsPlist ofType:@"plist"]]; } if (!categoryLockStateArray) { NSLog(@"-Created categoryLockStateArray-"); categoryLockStateArray = [[NSMutableArray alloc] initWithCapacity:[[self.levelsPlist allKeys] count]]; for (int i=0; i<[[self.levelsPlist allKeys] count]; i++) { [categoryLockStateArray insertObject:[NSNumber numberWithBool:FALSE] atIndex:i]; } } // set the properties of the categories self.categoryNames = [self.levelsPlist allKeys]; NUM_CATEGORIES = [self.categoryNames count]; thisCatCopy = [[NSMutableDictionary alloc] initWithDictionary:[[levelsPlist objectForKey:[self.categoryNames objectAtIndex:pageControl.currentPage]] mutableCopy]]; NUM_FINISHED = [[thisCatCopy objectForKey:kNumLevelsBeatenInCategory] intValue]; }


  • Weirdness debugging Visual Studio C++ 2008

    - by Jeff Dege
    I have a legacy C++ app, that in its most incarnation we've been building with makefiles and VS2003's command-line tool. I'm trying to get it to build using VS2008 and MsBuild. The build is working OK, but I'm getting errors where I'd never seen errors, before, and stepping through in VS2008's debugger only confuses me. The app links a number of static libraries, which fall into two categories: those that are part of the same application suite, and those that are shared between a number of application suites. Originally, I had a .csproj file for each static library, and two .sln files, one for the application suite (including the suite-specific libraries) and one for the non-suite-specific shared libraries. The shared libraries were included in the link, their projects were not included in the application suite .sln. The application instantiates an object from a class that is defined in one of the shared libraries. The class has a member object of a class that wraps a linked list. The constructor of the linked list class sets its "head" pointer to null. When I run the app, and try to add an element to the linked list, I get an error - the head pointer contains the value 0xCCCCCCCC. So I step through with the debugger. And see weirdness. When the current line in the debugger is in a source file belonging to the static library, the head pointer contains 0x00000000. When I step into the constructor, I can see the pointer being set to that value, and when I'm stepped into any other method of the class, I can see that the head pointer still contains 0x00000000. But when I step out into methods that are defined in the application suite .sln, it contains 0xCCCCCCCC. It's not like it's being overwritten. It changes back and forth depending upon which source file I am currently debugging. So I included the shared library's project in the application suite .sln, and now I see the head pointer containing 0xCCCCCCCC all the time. It looks like the constructor of the linked list class is not being called. So now, I'm entirely confused. Anyone have any ideas?


  • What web platform is right for me?

    - by egervari
    I've been looking at web frameworks like Rails, Grails, etc. I'm used to doing applications in Spring Framework with Hibernate... and I want something more productive. One of the things I realized is that while some of the things in Grails is sexy, there are some serious problems with it. Grails' controllers: 1) are implemented awfully. They don't seem to be able to extend from super classes at runtime. I tried this to add base actions and helper methods, and this seems to cause grails to blow up. 2) are based on an obsolete request parameters model (rather than form backing objects, which are much nicer). 3) are hard to test. Command objects are treated totally differently... and it's actually MUCH harder to write the test than it is to write the controller code. 4) Command objects operate totally differently. They are pre-validated and bound, which causes a lot of inconsistencies than basic parameter model. 5) Command objects are not reusable, and it's a pain in the rear to reuse most of the stuff from the domain classes, like constraints and fields. This is TRIVIAL to do in basic Spring. Why the hell was it not trivial to do in Grails? 6) The scaffolding that is generated is pure crap. It doesn't generalize inserts and updates... and it actually copy/pastes a pile of code in two views: create.gsp and edit.gsp. The views themselves are gargantuan piles of doggie do-do. This is further compounded by the fact that it uses low-level parameters and not objects. Integration tests are 30x slower than a Spring integration test. It is disgusting. Some mocking tests are so hard to write and aren't guaranteed to work when it's deployed, that I think it discourages fast, tdd test cycles. Most things seem to screw up grails while it's running, like adding a taglib, or anything really. The server restart problem wasn't solved at all. I'm starting to think going with Spring/Hibernate/Java is the only way to go. While there is a pretty big cost at startup, I know it'll eventually smooth out. It sucks I can't use a language like Scala... because idiomatically, it is so incompatible with Hibernate. This app is also not a run-of-the-mill UI over a database. It's got some of that, but it's not going to be a slouch. I am deathly scared of Grails now because of how crap it is in the Controller layer. Suggestions on what I can do?


  • How to page multiple data sets in ASP.NET MVC

    - by REA_ANDREW
    On a single view I will have three sets of paged data. Which means for each model I will have The Objects The Page Index The Page Size My initial thought was for example: public class PagedModel<T> where T:class { public IList<T> Objects { get; set; } public int ModelPageIndex { get; set; } public int ModelPageSize { get; set; } } Then having a model which is to be supplied to the action as for example: public class TypesViewModel { public PagedModel<ObjectA> Types1 { get; set; } public PagedModel<ObjectB> Typed2 { get; set; } public PagedModel<ObjectC> Types3 { get; set; } } So if I then for example have the Index view inherit from the type: System.Web.Mvc.ViewPage<uk.co.andrewrea.forum.Web.Models.TypesViewModel> Now my initial aciton method for the index is simply: public ActionResult Index() { var forDisplayPurposes = new TypesViewModel(); return View(forDisplayPurposes); } If I then want to page, it is here where I am struggling to decide which action to take. Lets say that I select the next page of the Types2 PageModel. What should the action look like for this in order to return the new view showing the second page of the Types2 PageModel I was thinking possibly to duplicate the action but use it with POST [AcceptVerbs(HttpVerbs.Post)] public ActionResult Index(TypesViewModel model) { return View(model); } Is this a good way to approach it. I understand there is always Session, but I was just wondering how such a thing is achieved currently out there. If any best methods have been mutually accepted and things. So simply, one page with multiple paged models. How to persist the data for each using a wrapper model. Which way should you pass in the model and which way should you page the data, i.e. Form Post Lastly, I have seen the routes take this into account i.e. {controller}/{action}/{id}/{pageindex}/{pagesize} but this only accounts for one model and I do not really wwant to repeat the pagesize and pageindex values for the number of models I have inside the wrapper model. Thanks for your time!! Andrew
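
    One sketch of how the pieces could fit, assuming each model's data comes from an IQueryable source: a small helper fills PagedModel<T> with Skip/Take, and the POST action rebuilds the wrapper from whichever page indexes were posted. The repository and its queryables are hypothetical; the type and property names come from the question.

      using System.Linq;
      using System.Web.Mvc;

      public static class Paging
      {
          // Fill a PagedModel<T> from any IQueryable source with Skip/Take.
          public static PagedModel<T> Page<T>(IQueryable<T> source, int pageIndex, int pageSize)
              where T : class
          {
              return new PagedModel<T>
              {
                  Objects = source.Skip(pageIndex * pageSize).Take(pageSize).ToList(),
                  ModelPageIndex = pageIndex,
                  ModelPageSize = pageSize
              };
          }
      }

      // Inside the controller (repository and its ObjectAs/ObjectBs/ObjectCs are assumed):
      [AcceptVerbs(HttpVerbs.Post)]
      public ActionResult Index(int page1, int page2, int page3, int pageSize)
      {
          var model = new TypesViewModel
          {
              Types1 = Paging.Page(repository.ObjectAs, page1, pageSize),
              Typed2 = Paging.Page(repository.ObjectBs, page2, pageSize),
              Types3 = Paging.Page(repository.ObjectCs, page3, pageSize)
          };
          return View(model);
      }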


  • Store Observer not always being called

    - by Nixarn
    Has anyone else here experienced problems with their Store Observer class not being called always when the user for instance cancels a request (or purchases something) We just had our update that brought in app purchases go live last night, and before that we had obviously tested everything tons of times against the Sandbox and everything was working fine. Now however, when the update went live in a real environment we keep getting issues with the store. For instance, in a freshly booted iPhone / iPod, the first time you run the app, if you then try to make a purchase and then immediately cancel it from the first dialog, it seems as if the callback for the cancel is not getting called. If you then restart the app it seems as if it always works after that, or at least. Same thing with other callbacks, seems as if our store observer isn't listening as the callbacks aren't being registered on the phone. One example of this is if you purchase something, then nothing will happen (if this is the first time the app is launched at least). You get the purchase successful dialog from the app store but it seems as if our own code isn't called. If you then quit the app and restart it the callback gets called. Same problem happens if you for instance try to start a request to download all previous purchases and then immediately cancel it as the first dialog pops up, if you do that then the callback for a failed restore is not called, until you then restart the app and try it again, then it always seems to work. The way we have implemented our store observer is by creating a custom class that's implements the SKPaymentTransactionObserver interface. @interface StoreObserver : NSObject<SKPaymentTransactionObserver> In the class we have implemented the following methods: - (void)paymentQueue:(SKPaymentQueue *)queue updatedTransactions:(NSArray *)transactions - (void)paymentQueueRestoreCompletedTransactionsFinished:(SKPaymentQueue *)queue - (void)paymentQueue:(SKPaymentQueue *)queue restoreCompletedTransactionsFailedWithError:(NSError *)error The way our restore process works is that if you tap on the button that allows you to download all we simply run the restoreCompletedTransactions code as follows: [[SKPaymentQueue defaultQueue] restoreCompletedTransactions]; However, the callback, restoreCompletedTransactionsFailedWithError, which has been implemented in the store observer, does not always get called when we try to cancel the request. This happens when you boot the iPhone / iPod and try this for the first time. If you after that restart the app everything works fine. The StoreObserver class is created when our app is launched, just by running the following code: pStoreObserver = [[StoreObserver alloc] init]; [[SKPaymentQueue defaultQueue] addTransactionObserver:pStoreObserver]; Has anyone else had any similar experiences? Or does anyone have any suggestions on how to solve this? As I said, in the sandbox environment everything was working fine, no issues whatsoever, but now once it's gone live we're experiencing these.


  • Best Practices for Working with Multiple Monitors in Visual Studio 2010

    - by Clever Human
    Now that Visual Studio 2010 has support for multiple monitors, I am curious how other people have their environments arranged. I have yet to come up with an arrangement that I am really satisfied with. The current best I have come up with for my 2 monitor system is to have all code windows detached. Then, on my primary monitor, I am able to have two code windows side by side (using the Windows 7 keyboard shortcuts WinKey+LeftArrow and WinKey+RightArrow.) On my secondary monitor I put the rest of the IDE with all of the tool windows that are normally on the bottom (errors list, find window, call stack, etc...) docked where the code windows normally go. I've also tried having all those things detached and having almost nothing in the IDE proper. The problems with this layout are: Newly opened code windows always open in the IDE, not on top of one of the detached windows. Detached code windows do not remember their exact placement from session to session (they are slightly off, having me to use the winkey + arrow key shortcut again and again for each window. There seems to be no way to have the code panes aware that they are on top of one another (IE -- multiple tabs.) The CTRL+TAB shortcut always displays on top of the IDE proper. The Code Panes are always "on top" of (children of) the IDE. So clicking on any code pane brings the IDE to the foreground, even when I care only about that code pane, and not the IDE. Other more minor issues... What would go a long way to making this better is having the code panes detach such that they are tab strips that can have other code panes docked within them. The new multi-monitor support in VS2010 is good, but it still seems really lacking. Can these issues be solved with an add-in? If so, is anyone aware of one? Is there a better way to work with the IDE on multiple monitors than what I am doing? NOTE: While this question is subjective (there is certainly no "this is the best way and that's final" answer) I'd really like to know possibly better methods of working with the IDE than what I have come up with. The intent is not to start a "mine's best" flame war.


  • How to reliably categorize HTTP sessions in a proxy to the corresponding browser windows/tabs the user is viewing?

    - by Jehonathan
    I was using the Fiddler core .Net library as a local proxy to record the user activity in web. However I ended up with a problem which seems dirty to solve. I have a web browser say Google Chrome, and the user opened like 10 different tabs each with different web URLs. The problem is that the proxy records all the HTTP session initiated by each pages separately, causing me to figure out using my intelligence the tab which the corresponding HTTP session belonged to. I understand that this is because of the stateless nature of HTTP protocol. However I am just wondering is there an easy way to do this? I ended up with below c# code for that in Fiddler. Still its not a reliable solution due to the heuristics. This is a modification of the sample project bundled with Fiddler core for .NET 4. Basically what it does is filtering HTTP sessions initiated in last few seconds to find the first request or switching to another page made by the same tab in browser. It almost works, but not seems to be a universal solution. Fiddler.FiddlerApplication.AfterSessionComplete += delegate(Fiddler.Session oS) { //exclude other HTTP methods if (oS.oRequest.headers.HTTPMethod == "GET" || oS.oRequest.headers.HTTPMethod == "POST") //exclude other HTTP Status codes if (oS.oResponse.headers.HTTPResponseStatus == "200 OK" || oS.oResponse.headers.HTTPResponseStatus == "304 Not Modified") { //exclude other MIME responses (allow only text/html) var accept = oS.oRequest.headers.FindAll("Accept"); if (accept != null) { if(accept.Count>0) if (accept[0].Value.Contains("text/html")) { //exclude AJAX if (!oS.oRequest.headers.Exists("X-Requested-With")) { //find the referer for this request var referer = oS.oRequest.headers.FindAll("Referer"); //if no referer then assume this as a new request and display the same if(referer!=null) { //if no referer then assume this as a new request and display the same if (referer.Count > 0) { //lock the sessions Monitor.Enter(oAllSessions); //filter further using the response if (oS.oResponse.MIMEType == string.Empty || oS.oResponse.MIMEType == "text/html") //get all previous sessions with the same process ID this session request if(oAllSessions.FindAll(a=>a.LocalProcessID == oS.LocalProcessID) //get all previous sessions within last second (assuming the new tab opened initiated multiple sessions other than parent) .FindAll(z => (z.Timers.ClientBeginRequest > oS.Timers.ClientBeginRequest.AddSeconds(-1))) //get all previous sessions that belongs to the same port of the current session .FindAll(b=>b.port == oS.port ).FindAll(c=>c.clientIP ==oS.clientIP) //get all previus sessions with the same referrer URL of the current session .FindAll(y => referer[0].Value.Equals(y.fullUrl)) //get all previous sessions with the same host name of the current session .FindAll(m=>m.hostname==oS.hostname).Count==0 ) //if count ==0 that means this is the parent request Console.WriteLine(oS.fullUrl); //unlock sessions Monitor.Exit(oAllSessions); } else Console.WriteLine(oS.fullUrl); } else Console.WriteLine(oS.fullUrl); Console.WriteLine(); } } } } };
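
    As a side note, the nested FindAll chain is hard to follow; below is a hedged C# sketch of the same idea expressed as a "tab key" lookup. It only uses Session members that already appear in the question (LocalProcessID, port, fullUrl, and the Referer header via FindAll), and grouping by such a key remains a heuristic, not a reliable tab identifier.

      using System.Collections.Generic;

      static readonly Dictionary<string, List<Fiddler.Session>> sessionsByTab =
          new Dictionary<string, List<Fiddler.Session>>();

      static void Categorize(Fiddler.Session oS)
      {
          // Fall back to the request's own URL when there is no Referer header.
          var refs = oS.oRequest.headers.FindAll("Referer");
          string referer = (refs != null && refs.Count > 0) ? refs[0].Value : oS.fullUrl;

          // Heuristic key: same process, same client port, same originating page.
          string key = oS.LocalProcessID + "|" + oS.port + "|" + referer;

          List<Fiddler.Session> bucket;
          if (!sessionsByTab.TryGetValue(key, out bucket))
              sessionsByTab[key] = bucket = new List<Fiddler.Session>();
          bucket.Add(oS);
      }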


  • StreamWriter appends random data

    - by void
    Hi I'm seeing odd behaviour using the StreamWriter class writing extra data to a file using this code: public void WriteToCSV(string filename) { StreamWriter streamWriter = null; try { streamWriter = new StreamWriter(filename); Log.Info("Writing CSV report header information ... "); streamWriter.WriteLine("\"{0}\",\"{1}\",\"{2}\",\"{3}\"", ((int)CSVRecordType.Header).ToString("D2", CultureInfo.CurrentCulture), m_InputFilename, m_LoadStartDate, m_LoadEndDate); int recordCount = 0; if (SummarySection) { Log.Info("Writing CSV report summary section ... "); foreach (KeyValuePair<KeyValuePair<LoadStatus, string>, CategoryResult> categoryResult in m_DataLoadResult.DataLoadResults) { streamWriter.WriteLine("\"{0}\",\"{1}\",\"{2}\",\"{3}\"", ((int)CSVRecordType.Summary).ToString("D2", CultureInfo.CurrentCulture), categoryResult.Value.StatusString, categoryResult.Value.Count.ToString(CultureInfo.CurrentCulture), categoryResult.Value.Category); recordCount++; } } Log.Info("Writing CSV report cases section ... "); foreach (KeyValuePair<KeyValuePair<LoadStatus, string>, CategoryResult> categoryResult in m_DataLoadResult.DataLoadResults) { foreach (CaseLoadResult result in categoryResult.Value.CaseLoadResults) { if ((LoadStatus.Success == result.Status && SuccessCases) || (LoadStatus.Warnings == result.Status && WarningCases) || (LoadStatus.Failure == result.Status && FailureCases) || (LoadStatus.NotProcessed == result.Status && NotProcessedCases)) { streamWriter.Write("\"{0}\",\"{1}\",\"{2}\",\"{3}\",\"{4}\"", ((int)CSVRecordType.Result).ToString("D2", CultureInfo.CurrentCulture), result.Status, result.UniqueId, result.Category, result.ClassicReference); if (RawResponse) { streamWriter.Write(",\"{0}\"", result.ResponseXml); } streamWriter.WriteLine(); recordCount++; } } } streamWriter.WriteLine("\"{0}\",\"{1}\"", ((int)CSVRecordType.Count).ToString("D2", CultureInfo.CurrentCulture), recordCount); Log.Info("CSV report written to '{0}'", fileName); } catch (IOException execption) { string errorMessage = string.Format(CultureInfo.CurrentCulture, "Unable to write XML report to '{0}'", fileName); Log.Error(errorMessage); Log.Error(exception.Message); throw new MyException(errorMessage, exception); } finally { if (null != streamWriter) { streamWriter.Close(); } } } The file produced contains a set of records on each line 0 to N, for example: [Record Zero] [Record One] ... [Record N] However the file produced either contains nulls or incomplete records from further up the file appended to the end. For example: [Record Zero] [Record One] ... [Record N] [Lots of nulls] or [Record Zero] [Record One] ... [Record N] [Half complete records] This also happens in separate pieces of code that also use the StreamWriter class. Furthermore, the files produced all have sizes that are multiples of 1024. I've been unable to reproduce this behaviour on any other machine and have tried recreating the environment. Previous versions of the application didn't exhibite this behaviour despite having the same code for the methods in question. EDIT: Added extra code.
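
    Not a diagnosis of the trailing nulls, but a defensive variant worth trying: open the file through a FileStream with FileMode.Create (guaranteed truncation) and let using blocks own both the stream and the writer, so everything is flushed and released even when an exception is thrown mid-report. Minimal sketch with the report body elided (requires System.IO):

      public void WriteToCSV(string fileName)
      {
          using (var stream = new FileStream(fileName, FileMode.Create, FileAccess.Write, FileShare.None))
          using (var streamWriter = new StreamWriter(stream))
          {
              // ... write the header, summary and case records exactly as before ...
              streamWriter.WriteLine("\"{0}\",\"{1}\"", "99", "0");  // illustrative count record
          }
      }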


  • php regex guitar tab (tabs or tablature, a type of music notation)

    - by John
    I am in the process of creating a guitar tab to rtttl (Ring Tone Text Transfer Language) converter in PHP. In order to prepare a guitar tab for rtttl conversion I first strip out all comments (comments noted by #- and ended with -#), I then have a few lines that set tempo, note the tunning and define multiple instruments (Tempo 120\nDefine Guitar 1\nDefine Bass 1, etc etc) which are stripped out of the tab and set aside for later use. Now I essentially have nothing left except the guitar tabs. Each tab is prefixed with it's instrument name in conjunction with the instrument name noted prior. Some times we have tabs for 2 separate instruments that are linked because they are to be played together, ie a Guitar and a Bass Guitar playing together. Example 1, Standard Guitar Tab: |Guitar 1 e|--------------3-------------------3------------| B|------------3---3---------------3---3----------| G|----------0-------0-----------0-------0--------| D|--------0-----------0-------0-----------0------| A|------2---------------2---2---------------2----| E|----3-------------------3-------------------3--| Example 2, Conjunction Tab: |Guitar 1 e|--------------3-------------------3------------| B|------------3---3---------------3---3----------| G|----------0-------0-----------0-------0--------| D|--------0-----------0-------0-----------0------| A|------2---------------2---2---------------2----| E|----3-------------------3-------------------3--| | | |Bass 1 G|----------0-------0-----------0-------0--------| D|--------2-----------2-------2-----------2------| A|------3---------------3---3---------------3----| E|----3-------------------3-------------------3--| I have considered other methods of identifying the tabs with no solid results. I am hoping that someone who does regular expressions could help me find a way to identify a single guitar tab and if possible also be able to match a tab with multiple instruments linked together. Once the tabs are in an array I will go through them one line at a time and convert them into rtttl lines (exploded at each new line "\n"). I do not want to separate the guitar tabs in the document via explode "\n\n" or something similar because it does not identify the guitar tab, rather, it is identifying the space between the tabs - not on the tabs themselves. I have been messing with this for about a week now and this is the only major hold up I have. Everything else is fairly simple. As of current, I have tried many variations of the regex pattern. 
Here is one of the most recent test samples: <?php $t = " |Guitar 1 e|--------------3-------------------3------------| B|------------3---3---------------3---3----------| G|----------0-------0-----------0-------0--------| D|--------0-----------0-------0-----------0------| A|------2---------------2---2---------------2----| E|----3-------------------3-------------------3--| |Guitar 1 e|--------------3-------------------3------------| B|------------3---3---------------3---3----------| G|----------0-------0-----------0-------0--------| D|--------0-----------0-------0-----------0------| A|------2---------------2---2---------------2----| E|----3-------------------3-------------------3--| | | |Bass 1 G|----------0-------0-----------0-------0--------| D|--------2-----------2-------2-----------2------| A|------3---------------3---3---------------3----| E|----3-------------------3-------------------3--| "; preg_match_all("/^.*?(\\|).*?(\\|)/is",$t,$p); print_r($p); ?> It is also worth noting that inside the tabs, where the dashes and #'s are, you may also have any variation of letters, numbers and punctuation. The beginning of each line marks the tuning of each string with one of the following case insensitive: a,a#,b,c,c#,d,d#,e,f,f#,g or g. Thanks in advance for help with this most difficult problem.

    Read the article

  • How can I link axes of imshow plots for zooming and panning?

    - by Adam Fraser
    Suppose I have a figure canvas with 3 plots... 2 are images of the same dimensions plotted with imshow, and the other is some other kind of subplot. I'd like to be able to link the x and y axes of the imshow plots so that when I zoom in one (using the zoom tool provided by the NavigationToolbar), the other zooms to the same coordinates, and when I pan in one, the other pans as well. Subplot methods such as scatter and histogram can be passed kwargs specifying an axes for sharex and sharey, but imshow has no such configuration. I started hacking my way around this by subclassing NavigationToolbar2WxAgg (shown below)... but there are several problems here.

    1) This will link the axes of all plots in a canvas, since all I've done is get rid of the checks for a.in_axes().
    2) This worked well for panning, but zooming caused all subplots to zoom from the same global point, rather than from the same point in each of their respective axes.

    Can anyone suggest a workaround? Much thanks! -Adam

    from matplotlib.backends.backend_wxagg import NavigationToolbar2WxAgg

    class MyNavToolbar(NavigationToolbar2WxAgg):
        def __init__(self, canvas, cpfig):
            NavigationToolbar2WxAgg.__init__(self, canvas)

        # overridden
        # As mentioned in the code below, the only difference here from the overridden
        # method is that this one doesn't check a.in_axes(event) when deciding which
        # axes to start the pan in...
        def press_pan(self, event):
            'the press mouse button in pan/zoom mode callback'
            if event.button == 1:
                self._button_pressed = 1
            elif event.button == 3:
                self._button_pressed = 3
            else:
                self._button_pressed = None
                return

            x, y = event.x, event.y

            # push the current view to define home if stack is empty
            if self._views.empty():
                self.push_current()

            self._xypress = []
            for i, a in enumerate(self.canvas.figure.get_axes()):
                # only difference from the overridden method is that this one doesn't
                # check a.in_axes(event)
                if x is not None and y is not None and a.get_navigate():
                    a.start_pan(x, y, event.button)
                    self._xypress.append((a, i))
                    self.canvas.mpl_disconnect(self._idDrag)
                    self._idDrag = self.canvas.mpl_connect('motion_notify_event', self.drag_pan)

        # overridden
        def press_zoom(self, event):
            'the press mouse button in zoom to rect mode callback'
            if event.button == 1:
                self._button_pressed = 1
            elif event.button == 3:
                self._button_pressed = 3
            else:
                self._button_pressed = None
                return

            x, y = event.x, event.y

            # push the current view to define home if stack is empty
            if self._views.empty():
                self.push_current()

            self._xypress = []
            for i, a in enumerate(self.canvas.figure.get_axes()):
                # only difference from the overridden method is that this one doesn't
                # check a.in_axes(event)
                if x is not None and y is not None and a.get_navigate() and a.can_zoom():
                    self._xypress.append((x, y, a, i, a.viewLim.frozen(), a.transData.frozen()))

            self.press(event)
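
    Worth noting (a minimal sketch with made-up placeholder data): the sharing is a property of the Axes rather than of imshow, so the link can be declared when the subplots are created, and the stock NavigationToolbar then keeps the two image axes in sync for both zooming and panning without any toolbar subclassing.

    import numpy as np
    import matplotlib.pyplot as plt

    img1 = np.random.rand(100, 100)   # placeholder images standing in for the real data
    img2 = np.random.rand(100, 100)

    fig = plt.figure()
    ax1 = fig.add_subplot(1, 3, 1)
    # sharex/sharey are passed to the axes, not to imshow itself
    ax2 = fig.add_subplot(1, 3, 2, sharex=ax1, sharey=ax1)
    ax3 = fig.add_subplot(1, 3, 3)    # the unrelated third subplot stays independent

    ax1.imshow(img1)
    ax2.imshow(img2)
    ax3.plot(np.random.rand(50))

    plt.show()   # zooming or panning either image now moves both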

    Read the article

  • Is there any loose coupling mechanism in Objective-C + Cocoa like C# delegates or C++/Qt signals+slots?

    - by Eye of Hell
    Hello. For large programs, the standard way to tackle complexity is to divide the program's code into small objects. Most current programming languages offer this functionality via classes, and so does Objective-C. But after the source code is separated into small objects, the second challenge is to somehow connect them with each other. Standard approaches supported by most languages are composition (one object is a member field of another), inheritance, templates (generics) and callbacks. More cryptic techniques include method-level delegates (C#) and signals+slots (C++/Qt). I like the delegates / signals idea, since while connecting two objects I can connect individual methods with each other, without the objects knowing anything about each other. For C#, it will look like this:

    var object1 = new CObject1();
    var object2 = new CObject2();
    object1.SomethingHappened += object2.HandleSomething;

    In this code, if object1 calls its SomethingHappened delegate (like a normal method call), the HandleSomething method of object2 will be called. For C++/Qt, it will look like this:

    var object1 = new CObject1();
    var object2 = new CObject2();
    connect(object1, SIGNAL(SomethingHappened()), object2, SLOT(HandleSomething()));

    The result will be exactly the same. This technique has some advantages and disadvantages, but generally I like it more than interfaces, since as the program's code base grows I can change connections and add new ones without creating tons of interfaces. After examining Objective-C I haven't found any way to use the technique I like :(. It seems that Objective-C supports message passing perfectly well, but it requires object1 to have a pointer to object2 in order to pass it a message. If some object needs to be connected to lots of other objects, in Objective-C I will be forced to give it a pointer to each of the objects it must be connected to. So, the question :). Is there any approach in Objective-C programming that closely resembles the delegate / signal+slot type of connection, rather than 'give the first object a pointer to the second object so it can pass messages to it'? Method-level connections are a bit more preferable to me than object-level connections ^_^.
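
    For comparison only, here is the kind of wiring the question describes, sketched in a few lines of plain Python (not Objective-C, and the class and method names are hypothetical). The sketch just pins down the behaviour being asked for: the sender keeps a list of callables and never learns anything about the objects behind them.

    class Signal:
        """A minimal stand-in for a C# delegate / Qt signal: a list of callables."""
        def __init__(self):
            self._slots = []

        def connect(self, slot):
            self._slots.append(slot)

        def emit(self, *args, **kwargs):
            for slot in self._slots:
                slot(*args, **kwargs)

    class Object1:
        def __init__(self):
            self.something_happened = Signal()

        def do_work(self):
            self.something_happened.emit()   # fires without knowing who is listening

    class Object2:
        def handle_something(self):
            print("Object2 reacted")

    o1, o2 = Object1(), Object2()
    o1.something_happened.connect(o2.handle_something)   # method-level wiring
    o1.do_work()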

    Read the article

  • Visual Studio 2010 and Test Driven Development

    - by devoured elysium
    I'm making my first steps in Test Driven Development with Visual Studio. I have some questions regarding how to implement generic classes with VS 2010. First, let's say I want to implement my own version of an ArrayList. I start by creating the following test (I'm using MSTest in this case):

    [TestMethod]
    public void Add_10_Items_Remove_10_Items_Check_Size_Is_Zero()
    {
        var myArrayList = new MyArrayList<int>();
        for (int i = 0; i < 10; ++i)
        {
            myArrayList.Add(i);
        }
        for (int i = 0; i < 10; ++i)
        {
            myArrayList.RemoveAt(0);
        }
        int expected = 0;
        int actual = myArrayList.Size;
        Assert.AreEqual(expected, actual);
    }

    I'm using VS 2010's ability to hit Ctrl + . and have it implement classes/methods on the go. I have been running into some trouble when implementing generic classes. For example, when I define an .Add(10) method, VS doesn't know whether I intend a generic method (as the class is generic) or an Add(int number) method. Is there any way to differentiate between these? The same can happen with return types. Let's assume I'm implementing a MyStack stack and I want to test whether, after I push an element and pop it, the stack is still empty. We all know Pop should return something, but usually the code of this test shouldn't care about it. Visual Studio would then think that Pop is a void method, which in fact is not what one would want. How do I deal with this? For each method, should I start by writing tests that are "very specific", such that it is obvious the method should return something, so I don't get this kind of ambiguity? Even if I'm not using the result, should I have something like int popValue = myStack.Pop()? How should I write tests for generic classes? Only test with one generic type? I have been using ints, as they are easy to use, but should I also test with different kinds of objects? How do you usually approach this? I see there is a popular tool called TestDriven for .NET. With the VS 2010 release, is it still useful, or are a lot of its features now part of VS 2010, rendering it kinda useless? Thanks

    Read the article

< Previous Page | 412 413 414 415 416 417 418 419 420 421 422 423  | Next Page >