Search Results

Search found 4240 results on 170 pages for 'thoughts'.

Page 142/170

  • Telerik RadGrid: grid clientside pagination

    - by ram
    I have a web service which returns some data. I massage this data and use it as the data source for my RadGrid (Telerik). The data source is quite large, so I would like to paginate it, but I found a couple of problems:

    1. When I paginate on the server side I have to bind the grid again for each page change, which essentially means another call to the web service to get the data. That call is expensive for me. I would rather forgo the benefits of pagination and display all the results on the same page, except that would be a bit clumsy.

    2. During the postback, RadGrid1.Items.Count is the number of items on the current page (25 in my case), which is expected since not all items in the data source get bound. That by itself is not an issue. The real issue is that we have some checkboxes which get checked based on a business condition, and we add these to our business object/DB later. If the user has not navigated through all the pages, those "checked" items never get added, because pagination limits the grid's Items to whatever was bound for that particular page index.

    My thoughts: I would rather have some sort of client-side pagination, where we hide/show content instead of going to the server and data-binding every time. It would still return all the results, but the UI would not be clumsy and the grid would have all the items during postback.

    Is there a way to do this? If it were a regular ASP.NET GridView, can someone point me to a good article that would serve my purpose?

    Ram

    PS: who else thinks RadGrid is crazy? (Unfortunately I did not make this choice.)

    Read the article

  • Magento: Attribute always returns default value in catalog view, works fine in product view

    - by colinodell
    I've created a new Yes/No attribute for products. I've extended the Product model to do some custom logic, and the custom functions are working everywhere. When I initially tried getting the custom attribute value I ran into an issue: Magento wasn't loading it for me, so calls to $product->getMyAttributeName() did nothing. In the product views, I got it working with this additional function:

        public function getAttrVal($attr_name)
        {
            return $this->getResource()->getAttribute($attr_name)->getFrontend()->getValue($this);
        }

    So that worked great when viewing a product on the frontend. It would fetch the assigned value if set, or the default if not. But when I view any category (a grid of all products in that category), the exact same code is executed and my getAttrVal() function always returns the default value, even if I've explicitly set the value for my product. I can't, for the life of me, figure out why the attribute loads correctly in the product view but the category view always grabs the default value, despite running the exact same code. Any thoughts?
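    For what it's worth, the usual explanation (offered as a guess, since the setup isn't visible here) is that category listings use a product collection which only loads attributes flagged "Used in Product Listing" or added explicitly, so getResource()->getAttribute() falls back to the default. Two hedged options: set "Used in Product Listing" to Yes on the attribute in the admin, or add the attribute to the listing collection yourself, roughly like the sketch below (the observer and event wiring are illustrative, not taken from the question):

        // Hypothetical observer for the catalog_product_collection_load_before event
        public function addMyAttribute(Varien_Event_Observer $observer)
        {
            $collection = $observer->getEvent()->getCollection();
            // Make the custom attribute available to products loaded in category view
            $collection->addAttributeToSelect('my_attribute_name');
        }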

    Read the article

  • Building A True Error Handler

    - by Kevin Pirnie
    I am trying to build an error handler for my desktop application. The code is in the ZipCM.ErrorManager class listed below. What I am finding is that the output file does not give me the correct info for the StackTrace. Here is how I am trying to use it:

        Try
            '... Some stuff here!
        Catch ex As Exception
            Dim objErr As New ZipCM.ErrorManager
            objErr.Except = ex
            objErr.Stack = New System.Diagnostics.StackTrace(True)
            objErr.Location = "Form: SelectSite (btn_SelectSite_Click)"
            objErr.ParseError()
            objErr = Nothing
        End Try

    Here is the class:

        Imports System.IO

        Namespace ZipCM
            Public Class ErrorManager
                Public Except As Exception
                Public Location As String
                Public Stack As System.Diagnostics.StackTrace

                Public Sub ParseError()
                    Dim objFile As New StreamWriter(Common.BasePath & "error_" & FormatDateTime(DateTime.Today, DateFormat.ShortDate).ToString().Replace("\", "").Replace("/", "") & ".log", True)
                    With objFile
                        .WriteLine("-------------------------------------------------")
                        .WriteLine("-------------------------------------------------")
                        .WriteLine("An Error Occured At: " & DateTime.Now)
                        .WriteLine("-------------------------------------------------")
                        .WriteLine("LOCATION:")
                        .WriteLine(Location)
                        .WriteLine("-------------------------------------------------")
                        .WriteLine("FILENAME:")
                        .WriteLine(Stack.GetFrame(0).GetFileName())
                        .WriteLine("-------------------------------------------------")
                        .WriteLine("LINE NUMBER:")
                        .WriteLine(Stack.GetFrame(0).GetFileLineNumber())
                        .WriteLine("-------------------------------------------------")
                        .WriteLine("SOURCE:")
                        .WriteLine(Except.Source)
                        .WriteLine("-------------------------------------------------")
                        .WriteLine("MESSAGE:")
                        .WriteLine(Except.Message)
                        .WriteLine("-------------------------------------------------")
                        .WriteLine("DATA:")
                        .WriteLine(Except.Data.ToString())
                    End With
                    objFile.Close()
                    objFile = Nothing
                End Sub
            End Class
        End Namespace

    What is happening is that .GetFileLineNumber() is getting the line number of the 'objErr.Stack = New System.Diagnostics.StackTrace(True)' line inside my Try...Catch block. In fact, it's the exact line number that statement is on. Any thoughts on what is going on here, and how I can catch the real line number the error is occurring on?
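    A minimal sketch of the usual fix, assuming the .pdb debug symbols are deployed alongside the executable: build the StackTrace from the exception itself rather than from the current location, so frame 0 describes where the exception was thrown instead of the line inside the Catch block.

        Try
            '... Some stuff here!
        Catch ex As Exception
            Dim objErr As New ZipCM.ErrorManager
            objErr.Except = ex
            ' Hypothetical change: pass the exception so the trace points at the throw site,
            ' not at this Catch block. File/line info still requires the .pdb to be present.
            objErr.Stack = New System.Diagnostics.StackTrace(ex, True)
            objErr.Location = "Form: SelectSite (btn_SelectSite_Click)"
            objErr.ParseError()
        End Try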

    Read the article

  • Routing redirection decision

    - by programming late night
    I have really no idea why I'm asking this, as it's a completely irrelevant question I should have figured out within milliseconds, yet here I am. In my project I have a Router class which splits up the request and selects the right page to be loaded. Fine so far. I also have a page that is displayed when the user requests a page that doesn't exist - you know, 404.

    The thing is, if the user enters mydomain.com/404 (I use mod_rewrite with a request collector via index.php?req=*), the 404 page is displayed as a perfectly normal page even though there was no error. So someone requesting /404 directly is shown the page, but he can't tell whether the 404 page itself doesn't exist and he is actually getting a, you guessed it, 404 error, or whether he found some flaw in the system that lets him see an error page when there is no error.

    Short version: if the user enters mydomain.com/404, the 404 page is shown even though there is no 404 error. I'm sure some of you have run into this already, and I just spontaneously wanted to hear your thoughts on it. Strange, eh? Should I redirect direct access to my 404 page to the home page? Should I do nothing? Should I just go to bed and stop asking irrelevant stuff?
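    One way to keep the friendly error page but still behave like a real 404 is to have the router send the 404 status code whenever it falls back to that page. A hedged PHP sketch; the method names are illustrative, not from the question:

        // Hypothetical fallback branch inside the Router class
        if (!$this->pageExists($request)) {
            header('HTTP/1.1 404 Not Found');   // or http_response_code(404) on PHP >= 5.4
            $this->render('404');               // still show the friendly page
            return;
        }

    A direct request to /404 can then either do the same thing or redirect home; since the status line already tells browsers and crawlers what happened, either choice is defensible.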

    Read the article

  • virtual methods and template classes

    - by soxs060389
    Hi. I've run into a problem, I think a very specific one. I've got two classes, a Base class B and a Derived class D (derived from B). B is a class template (or template class) and has a pure virtual method

        virtual void work(const T &dummy) = 0;

    The Derived class is supposed to reimplement this, but as D derives from B<int*> rather than being another class template, the compiler spits at me that the virtual function and the template don't work together. Any ideas how to accomplish what I want? I am thankful for any thoughts and ideas, especially if you have already worked out this problem. This class is fixed, aka AS IS - I can not edit it without breaking the existing code base:

        template <typename T>
        class B {
        public:
            ...
            virtual void work(const T &dummy) = 0;
            ...
        };

    Take int* as an example:

        class D : public B<int*> {
            ...
            virtual void work(const int* &dummy) { /* put work code here */ }
            ...
        };

    Edit: The compiler tells me that void B<T>::work(const T&) [with T = int*] is pure virtual within D.
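    For what it's worth, the usual cause here is const placement rather than templates: with T = int*, const T& means "reference to const pointer" (int* const&), not "reference to pointer to const" (const int*&), so the derived function does not override the pure virtual at all. A sketch of the matching signature:

        class D : public B<int*> {
        public:
            // int* const& is what B<int*>::work(const T&) expands to
            virtual void work(int* const &dummy) { /* put work code here */ }
        };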

    Read the article

  • How to extract the SQL Command from a Compiled LINQ Query

    - by Harry
    In normal (not compiled) LINQ to SQL queries you can extract the SqlCommand from the IQueryable via the following code:

        SqlCommand cmd = (SqlCommand)table.Context.GetCommand(query);

    Is it possible to do the same for a compiled query? The following code gives me a delegate to a compiled query:

        private static readonly Func<Data.DAL.Context, string, IQueryable<Word>> Query_Get =
            CompiledQuery.Compile<Data.DAL.Context, string, IQueryable<Word>>(
                (context, name) => from r in context.GetTable<Word>()
                                   where r.Name == name
                                   select r);

    When I use this to generate the IQueryable and attempt to extract the SqlCommand, it doesn't seem to work. When debugging the code I can see that the SqlCommand returned has the 'very' useful CommandText of 'SELECT NULL AS [EMPTY]'.

        using (var db = new Data.DAL.Context())
        {
            IQueryable<Word> query = Query_Get(db, "word");
            SqlCommand cmd = (SqlCommand)db.GetCommand(query);
            Console.WriteLine(cmd != null ? cmd.CommandText : "Command Not Found");
        }

    I can't find anything on Google about this particular scenario, as no doubt it is not a common thing to attempt... So.... Any thoughts?
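    One workaround worth trying, hedged because I have not verified it against a compiled query specifically: DataContext.Log writes every SQL command the context actually executes, so attaching a TextWriter and forcing the query to run should reveal the generated SQL even when GetCommand doesn't cooperate.

        // needs using System.IO; for StringWriter
        using (var db = new Data.DAL.Context())
        {
            var sql = new StringWriter();
            db.Log = sql;                               // built-in LINQ to SQL logging hook
            var words = Query_Get(db, "word").ToList(); // force execution
            Console.WriteLine(sql.ToString());          // generated SELECT, parameters included
        }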

    Read the article

  • WCF data services (OData), query with inheritance limitation?

    - by Mathieu Hétu
    Project: a WCF Data Service using internally the EF4 CTP5 Code-First approach. I configured entities with inheritance (TPH). See my previous question on this topic: Previous question about multiple entities - same table. The mapping works well, and unit tests over EF4 confirm that queries run smoothly. My entities look like this:

        ContactBase (abstract)
        Customer (inherits from ContactBase) - this entity also has several navigation properties toward other entities
        Resource (inherits from ContactBase)

    I have configured a discriminator, so both Customer and Resource map to the same table. Again, everything works fine from the EF4 point of view (unit tests all green!). However, when exposing this DbContext over WCF Data Services, I get:

    - The CustomerBases set is exposed (the Customers and Resources sets seem hidden - is that by design?)
    - When I query Customers over OData, I get this error:

        Navigation Properties are not supported on derived entity types. Entity Set 'ContactBases' has a instance of type 'CodeFirstNamespace.Customer', which is an derived entity type and has navigation properties. Please remove all the navigation properties from type 'CodeFirstNamespace.Customer'.

    Stacktrace:

        at System.Data.Services.Serializers.SyndicationSerializer.WriteObjectProperties(IExpandedResult expanded, Object customObject, ResourceType resourceType, Uri absoluteUri, String relativeUri, SyndicationItem item, DictionaryContent content, EpmSourcePathSegment currentSourceRoot)
        at System.Data.Services.Serializers.SyndicationSerializer.WriteEntryElement(IExpandedResult expanded, Object element, ResourceType expectedType, Uri absoluteUri, String relativeUri, SyndicationItem target)
        at System.Data.Services.Serializers.SyndicationSerializer.<DeferredFeedItems>d__b.MoveNext()
        at System.ServiceModel.Syndication.Atom10FeedFormatter.WriteItems(XmlWriter writer, IEnumerable`1 items, Uri feedBaseUri)
        at System.ServiceModel.Syndication.Atom10FeedFormatter.WriteFeedTo(XmlWriter writer, SyndicationFeed feed, Boolean isSourceFeed)
        at System.ServiceModel.Syndication.Atom10FeedFormatter.WriteFeed(XmlWriter writer)
        at System.ServiceModel.Syndication.Atom10FeedFormatter.WriteTo(XmlWriter writer)
        at System.Data.Services.Serializers.SyndicationSerializer.WriteTopLevelElements(IExpandedResult expanded, IEnumerator elements, Boolean hasMoved)
        at System.Data.Services.Serializers.Serializer.WriteRequest(IEnumerator queryResults, Boolean hasMoved)
        at System.Data.Services.ResponseBodyWriter.Write(Stream stream)

    This seems like a limitation of WCF Data Services... is it? There isn't much documentation on the web about WCF Data Services (OData) and inheritance. How can I get past this exception? I need these navigation properties on derived entities, and inheritance seems the only way to map two entities onto the same table with EF4 CTP5... Any thoughts?

    Read the article

  • Conditional execution of EventTriggers in Silverlight 3

    - by Jason
    I'm currently working on the UI of a Silverlight application and need to be able to change the visual state of a control to one of two possible states, based on its current state, when handling the same event trigger. For example: I have a control that sits partially in a clipping path. When I click the visible part of the control I want to change the state to "Visible", and if I click it again while it is in its "Visible" state I want to change to the "Hidden" state. Example XAML:

        <i:Interaction.Triggers>
            <i:EventTrigger EventName="MouseLeftButtonUp">
                <ic:GoToStateAction StateName="Visible"/>
                <ic:GoToStateAction StateName="Hidden"/>
            </i:EventTrigger>
        </i:Interaction.Triggers>

    Where "i" is "System.Windows.Interactivity;assembly=System.Windows.Interactivity" and "ic" is "Microsoft.Expression.Interactivity.Core;assembly=Microsoft.Expression.Interactions". I'm currently working in Expression Blend 3 and would prefer a XAML-only solution, but am not opposed to coding this if it is completely necessary. I have tried recording a change in the target state name in Blend, but this did not work. Any thoughts on this?

    Read the article

  • Agile methodologies. Are they a by-product of mind control techniques such as NLP / Scientology?

    - by Bobb
    The more I read about contemporary methods combining Scrum, TDD and XP, the more I feel like I have already seen these methods. I am not arguing that the agile approach is much more progressive than older rigid structures like waterfall; what I am saying is that agile methodologies seem ideal to be used as a nest for a brainwashing business.

    I read a few articles which kept referring to authors whom I checked afterwards, and they call themselves coaches and trainers (the usual thing when NLP specialists are involved) with no apparent software development history. I also met a guy who is a scrum facilitator (a term widely used in relation to Scientology) at a high-profile company. I talked to him for less than 5 minutes, but I got the complete feeling that he is either on drugs or has been programmed by a powerful NLP specialist. The way he talked and his body movements suggested he is not an average normal person (in terms of the normal distribution :))...

    Please don't get me wrong. I am not a fan of conspiracy theories. But I had an experience with a member of the Church of Scientology who tried to invade a commercial firm and actually got halfway to the very top in just 3 weeks. I saw his work. For now my impression is that psycho-manipulators are invading the IT industry through the convenient door of agile techniques. Anyone have the same feeling/thoughts?

    Read the article

  • How to track the touch vector?

    - by mystify
    I need to calculate the direction of a touch drag, to determine whether the user is dragging up the screen or down the screen. Actually pretty simple, right? But:

    1) The finger goes down; you get -touchesBegan:withEvent: called.
    2) You must wait until the finger moves and -touchesMoved:withEvent: gets called.
    3) Problem: at this point it's dangerous to tell whether the user dragged up or down.

    My thoughts: check the time and accumulate calculated vectors until it's safe to tell the direction of the touch. Easy? No. Think about it: what if the user holds the finger down for 5 minutes on the same spot, but THEN decides to move up or down? BANG! Your code would fail, because it tried to determine the direction of the touch when the finger didn't really move.

    Problem 2: when the finger goes down and stays on the same spot for a few seconds because the user is a bit undecided and thinks about what to do next, you'll very likely get a lot of -touchesMoved:withEvent: calls, but with very minor changes in touch location.

    So my next thought: do the accumulation in -touchesMoved:withEvent:, but only if a certain threshold of movement has been exceeded. I bet you have some better concepts in place?
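    A minimal sketch of the threshold idea, assuming a view controller tracking a single touch; the 10-point threshold and the _startPoint ivar are illustrative values, not from the question:

        // Hypothetical ivar: CGPoint _startPoint;
        - (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
            _startPoint = [[touches anyObject] locationInView:self.view];
        }

        - (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
            CGPoint p = [[touches anyObject] locationInView:self.view];
            CGFloat dy = p.y - _startPoint.y;
            if (fabs(dy) < 10.0) return;          // ignore jitter below the threshold
            BOOL draggingDown = (dy > 0);         // UIKit's y axis grows downward
            // act on draggingDown here
        }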

    Read the article

  • Best way to migrate export/import from SQL Server to Oracle

    - by matao
    Hi guys! I'm faced with needing access for reporting to some data that lives in Oracle and other data that lives in a SQL Server 2000 database. For various reasons these live on different sides of a firewall. Now we're looking at doing an export/import from SQL Server to Oracle, and I'd like some advice on the best way to go about it...

    The procedure will need to be fully automated and run nightly, so that excludes using the SQL developer tools. I also can't make a live link between databases from our (Oracle) side, as the firewall is in the way. The data needs to be transformed in the process from a star schema to a denormalised table ready for reporting.

    What I'm thinking about is writing a monster query for SQL Server (which I mostly have already) that will denormalise and read out the data into a flat file using the SQL Server equivalent of sqlplus, run as a scheduled task, and dump it into a well-known location. Then on the Oracle side a cron job copies down the file, loads it with SQL*Loader, rebuilds indexes, etc. This is all doable, but very manual.

    Is there one tool, or a combination of FOSS or standard Oracle/SQL Server tools, that could automate this for me? The irreducible complexity is the query on one side and building indexes on the other, but I would love to not have to write the CSV dumping detail or the SQL*Loader script - just say "dump this view out to CSV" on one side and "truncate and insert into this table from CSV" on the other, without worrying about mapping column names and all the other arcane sqlldr voodoo... Best practices? Thoughts? Comments?

    Edit: I have about 50+ columns, all of varying types and lengths, in my dataset, which is why I'd prefer not to have to write out how to generate and map each single column...
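    For the mechanical part, a hedged sketch of the two stock tools (server names, file names and the column list are placeholders; the real query comes from your existing SQL): bcp dumps a query to CSV on the SQL Server side, and a SQL*Loader control file loads it on the Oracle side.

        rem SQL Server side (scheduled task): dump the denormalised view to CSV
        bcp "SELECT * FROM reporting.dbo.DenormalisedView" queryout report.csv -c -t"," -S SQLSERVER01 -T

        -- Oracle side: report.ctl, run as  sqlldr user/pass control=report.ctl
        LOAD DATA
        INFILE 'report.csv'
        TRUNCATE
        INTO TABLE report_flat
        FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
        TRAILING NULLCOLS
        (col1, col2, col3)

    The column list in the control file still has to be written once, but after that the nightly run is just the two commands plus the file copy.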

    Read the article

  • Publishing WCF .NET 3.5 to IIS 5.1

    - by Adam
    I've been developing a WCF web service using .NET 3.5 with IIS 7 and it works perfectly on my local computer. I tried publishing it to a server running IIS 5.1, and even though I can view the WSDL in my browser, the client application doesn't seem to be connecting to it correctly. I launched a packet sniffing app (Charles Proxy) and the response for the first message comes back to the client empty (0 bytes). Every message after the first one times out. The WCF service is part of a larger application that uses ASP.NET 3.5. That application has been working fine on IIS 5.1 for a while now, so I think it's something specific to WCF. I also tried throwing an exception in the SVC file to see if it made it that far, and the exception never got thrown, so I have a feeling it's something lower-level that isn't working. Any thoughts? Is there anything I need to install on the IIS 5.1 server? If so, how am I still able to view the WSDL in my browser?

    Read the article

  • Java Input/Output streams for unnamed pipes created in native code?

    - by finrod
    Is there a way to easily create Java Input/Output streams for unnamed pipes created in native code?

    Motivation: I need my own implementation of the Process class. The native code spawns me a new process with the subprocess' IO streams redirected to unnamed pipes.

    Problem: The file descriptors for the correct ends of those pipes make their way to Java. At this point I get stuck, as I cannot create a new FileDescriptor which I could pass to a FileInputStream/FileOutputStream. I have used reflection to get around the problem and got communication with a simple bouncer sub-process running. However, I have a notion that this is not the cleanest way to go.

    Have you used this approach? Do you see any problems with it? (The platform will never change.) Searching around the internets revealed a similar solution using native code. Any thoughts before I dive into heavy testing of this approach are very welcome. I would like to give existing code a shot before writing my own IO stream implementations... Thank you.
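    For reference, a sketch of the reflection approach being described. The private field name "fd" is an implementation detail of the OpenJDK/Oracle FileDescriptor class, so treat it as an assumption rather than an API; wrapping the descriptor once and handing it to the public FileInputStream(FileDescriptor) constructor is the part that is stable.

        import java.io.FileDescriptor;
        import java.io.FileInputStream;
        import java.lang.reflect.Field;

        public final class NativeFd {
            /** Wraps a raw POSIX file descriptor in a FileDescriptor via reflection. */
            public static FileDescriptor wrap(int rawFd) throws ReflectiveOperationException {
                FileDescriptor descriptor = new FileDescriptor();
                Field fdField = FileDescriptor.class.getDeclaredField("fd"); // JDK-specific field name
                fdField.setAccessible(true);
                fdField.setInt(descriptor, rawFd);
                return descriptor;
            }

            public static FileInputStream inputFor(int rawFd) throws ReflectiveOperationException {
                return new FileInputStream(wrap(rawFd)); // FileInputStream(FileDescriptor) is public API
            }
        }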

    Read the article

  • Aggregate path counts using HierarchyID

    - by austincav
    Business problem - understand process fallout using analytics data. Here is what we have done so far:

    1. Build a dictionary table with every possible process step
    2. Find each process "start"
    3. Find the last step for each start
    4. Join the dictionary table to the last step to find the path to the final step

    In the final report output we end up with a list of paths for each start to each final step:

        User   Fallout Step HierarchyID.ToString()
        A      1/1/1
        B      1/1/1/1/1
        C      1/1/1/1
        D      1/1/1
        E      1/1

    What this means is that five users (A-E) started the process. Assume only User B finished; the other four did not. Since this is a simple example (without branching) we want the output to look as follows:

        Step   Unique Users
        1      5
        2      5
        3      4
        4      2
        5      1

    The easiest solution I could think of is to take each HierarchyID.ToString(), parse that out into a set of subpaths, JOIN back to the dictionary table, and output using GROUP BY. Given the volume of data, I'd like to use the built-in HierarchyID functions, e.g. IsAncestorOf. Any ideas or thoughts how I could write this? Maybe a recursive CTE?
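    A hedged sketch of the recursive-CTE idea, assuming a table UserFallout(UserId, FalloutStep hierarchyid) holding one row per start with the path to that user's last step (table and column names are invented): it walks each path up through GetAncestor and counts distinct users per node, which can then be joined back to the dictionary table for step names.

        ;WITH Steps AS (
            SELECT UserId, FalloutStep AS StepNode
            FROM   UserFallout
            UNION ALL
            SELECT UserId, StepNode.GetAncestor(1)
            FROM   Steps
            WHERE  StepNode.GetLevel() > 1          -- stop once the root step is reached
        )
        SELECT StepNode.GetLevel()  AS StepDepth,
               StepNode.ToString()  AS Step,
               COUNT(DISTINCT UserId) AS UniqueUsers
        FROM   Steps
        GROUP  BY StepNode.GetLevel(), StepNode.ToString()
        ORDER  BY StepDepth;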

    Read the article

  • What's the use of writing tests matching configuration-like code line by line?

    - by Pascal Van Hecke
    Hi, I have been wondering about the usefulness of writing tests that match code one-to-one. Just an example: in Rails, you can define 7 RESTful routes in one line of routes.rb using:

        resources :products

    BDD/TDD prescribes that you test first and then write code. In order to test the full effect of this line, devs come up with macros, e.g. for shoulda: http://kconrails.com/2010/01/27/route-testing-with-shoulda-in-ruby-on-rails/

        class RoutingTest < ActionController::TestCase
          # simple
          should_map_resources :products
        end

    I'm not trying to pick on the guy that wrote the macros; this is just an example of a pattern that I see all over Rails. I'm just wondering what the use of it is... in the end you're just duplicating code, and the only thing you test is that Rails works. You could as well write a tool that transforms your test macros into actual code... When I ask around, people answer me that "the tests should document your code, so yes it makes sense to write them, even if it's just one line corresponding to one line". What are your thoughts?

    Read the article

  • facebook graph api does not return all feed items on facebook page

    - by Nick Franceschina
    At the time of this question, if you go here: http://www.facebook.com/realplayer you'll see, six posts down, that I have posted a photo with a message of "#highfive Cincinnati, OH". But if you go to either of these:

        http://graph.facebook.com/realplayer/feed
        http://graph.facebook.com/realplayer/tagged

    the JSON that is returned seemingly includes everything on the wall except for MY post. There is another photo post from someone else further down below mine, and it is showing up (and both my photo and his photo are in the "Fan photos" section).

    Obviously, since I can see everything with these links already, it appears that an access_token is not part of the equation... BUT, some more info:

    - if I use an access_token from a session that isn't me, I can't see the post in the JSON
    - if I use an access_token from MY logged-in session, then I DO see the post in the JSON

    So I'm very confused. If everyone in the world can see those posts on the wall without even authenticating, then I expect all of them to come back in the Graph API as well. Anyone have thoughts on this? I am aware of the "manage_page" permission... which I can use to get a list of accounts and special offline access tokens for those pages... and that's something I can explore... but it seems like a lot of work when my post seemingly SHOULD be there in the graph.

    Read the article

  • iPhone Development - calling external JSON API (will Apple reject?)

    - by RPM1984
    OK guys, I'm new to iPhone development, so apologies if this is a silly question, but before I actually create my app I want to know whether this is possible and whether Apple will reject it. (Note this is all theoretical.)

    I'd have an API (.NET) that runs on a cloud server somewhere and can return HTML/JSON/XML. I'll have a website that can access this API and allow customers to do some stuff (but this is not important for this question). I would then like my iPhone app to make a call to this API, which would return JSON data. So my iPhone app might make a call to http://myapp/Foos which would return a JSON string of Foo objects. The iPhone app would then parse this JSON and do some funky stuff with it.

    So, that's the background. Now the questions:

    1. Is this possible? (That is, call an external cloud API over HTTP and parse the JSON response?)
    2. What are the chances of Apple rejecting this application (because it would be calling a non-Apple API)?
    3. Are there any limitations (security, libraries, etc.) on the iPhone/Objective-C/Cocoa that might hinder this solution?

    On this website, they seem to be doing exactly what I'm asking. Thoughts, suggestions, links would be greatly appreciated...
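    For what it's worth: yes, this is possible, and Apple does not reject apps merely for consuming a third-party web API; rejections in this area tend to be about private Apple APIs or content, not networking. A minimal sketch on a current SDK (the URL is a placeholder; older projects used NSURLConnection plus a JSON library such as SBJson instead):

        NSURL *url = [NSURL URLWithString:@"https://myapp.example.com/Foos"];
        [[[NSURLSession sharedSession] dataTaskWithURL:url
                completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
            if (data != nil) {
                NSError *jsonError = nil;
                NSArray *foos = [NSJSONSerialization JSONObjectWithData:data options:0 error:&jsonError];
                NSLog(@"Fetched %lu Foo objects", (unsigned long)foos.count);
            }
        }] resume];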

    Read the article

  • Use MySQL trigger to update another table when duplicate key found

    - by Jason
    Been scratching my head on this one; hoping one of you kind people can direct me towards solving this problem.

    I have a MySQL table of customers. It contains a lot of data, but for the purpose of this question we only need to worry about 4 columns: 'ID', 'Firstname', 'Lastname', 'Postcode'.

    Problem is, the table contains a lot of duplicated customers. A new table is being created where each customer is unique, and for us a unique customer is based on 'Firstname', 'Lastname' and 'Postcode'. However (this is the important bit), we need to ensure each new "unique" customer record can also be matched to the original multiple entries for that customer in the original table.

    I believe the best way to do this is to have a third table that has 'NewUniqueID', 'OldCustomerID', so we can search this table for 'NewUniqueID' = '123' and it would return multiple 'OldCustomerID' values where appropriate.

    I am hoping to make this work using a trigger and the ON DUPLICATE KEY syntax. So what would happen is as follows:

    1. A query is run taking the old customer table and inserting it into the new unique table (a standard INSERT ... SELECT query).
    2. On duplicate key, continue adding records, but add one entry into the third table noting the 'NewUniqueID' that duped along with the 'OldCustomerID' of the record we were trying to insert.

    Hope this makes sense; my apologies if it isn't clear. I welcome and appreciate any thoughts on this one! Many thanks, Jason
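    For comparison, a hedged sketch of an alternative that skips the trigger entirely (table names are guesses, and customers_unique is assumed to have an AUTO_INCREMENT ID): insert the distinct customers first, then build the mapping table with a join on the three "identity" columns, which links every old row, duplicate or not, to its new unique ID.

        -- 1. One row per unique (Firstname, Lastname, Postcode)
        INSERT INTO customers_unique (Firstname, Lastname, Postcode)
        SELECT Firstname, Lastname, Postcode
        FROM   customers_old
        GROUP  BY Firstname, Lastname, Postcode;

        -- 2. Map every old ID to its new unique ID
        INSERT INTO customer_map (NewUniqueID, OldCustomerID)
        SELECT u.ID, o.ID
        FROM   customers_old o
        JOIN   customers_unique u
               ON  u.Firstname = o.Firstname
               AND u.Lastname  = o.Lastname
               AND u.Postcode  = o.Postcode;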

    Read the article

  • Determining the popularity of a video with ratings and views

    - by user295825
    I am about to embark on a new project - a video website. Users will be able to register and vote on videos by clicking "like" or "dislike", or something to that effect. In any event, it will be a 2-option voting system, not a 5-star system. Every X number of days I will be generating a "chart" of the most popular videos.

    So my question is: how should I determine the popularity of a given video? If I went the route of tallying up the videos with the most views, this could have the effect of exceptionally bad videos making it to the top of the charts (just because they're so bad). If I go the route of a scoring system based on the ratio of "like" to "dislike" votes (e.g. 100 like votes and 50 dislike votes equals a score of 2), videos with few views could appear at the top of the charts.

    So, what I need to do is a combination of the two - barring, of course, spammy views and votes. What are your guys' thoughts on the subject?
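    One common answer to exactly this problem, offered here as a suggestion rather than part of the question, is the lower bound of the Wilson score confidence interval: it rewards a high like ratio but discounts videos with only a handful of votes, which addresses both failure modes described above. A sketch in Python:

        import math

        def wilson_lower_bound(likes, dislikes, z=1.96):
            """Conservative estimate of the 'true' like ratio at ~95% confidence."""
            n = likes + dislikes
            if n == 0:
                return 0.0
            phat = likes / n
            return (phat + z*z/(2*n) - z * math.sqrt((phat*(1-phat) + z*z/(4*n)) / n)) / (1 + z*z/n)

        # 100 likes / 50 dislikes outranks 3 likes / 0 dislikes despite the lower raw ratio
        print(wilson_lower_bound(100, 50), wilson_lower_bound(3, 0))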

    Read the article

  • ColdFusion Session issue - multiple users behind one proxy IP -- cftoken and cfid seem to be shared

    - by smoothoperator
    Hi everyone, I have an application that uses ColdFusion's session management (instead of J2EE session management). We have one client whose company traffic has recently been switched to reach us via a proxy server on their network. So, to our ColdFusion server, it appears that all traffic for all of that company's accounts is coming from this one IP address.

    Of the session variables, part 1 is kept in a cflock, and part 2 is kept in editable session variables. I may be misunderstanding, but we have done it this way as we modify some values as needed throughout the application's usage.

    We are now running into an issue of this client having their session variables mixed up (?). We have one case where we set a timestamp... and when it comes time to look it up, it's empty. From the looks of it, this is happening because of another user on the same token. My initial thoughts are to look into modifying our existing session management to somehow generate a unique cftoken/cfid, or to start using the J2EE jsessionid, if that solves the problem at all. I have done some basic research on this issue and couldn't find anything similar, so I thought I'd ask here. Thanks!

    Read the article

  • How to access a web service behind a NAT?

    - by jr
    We have a product we are deploying to some small businesses. It is basically a RESTful API over SSL using Tomcat. It is installed on a server in the small business and is accessed via an iPhone or other portable device, so the devices connecting to the server could come from any number of IP addresses.

    The problem comes with the installation. When we install this service, port forwarding so the outside world can reach Tomcat always seems to become a problem - most of the time the owner doesn't know the router password, etc., etc. I am trying to research other ways we can accomplish this. I've come up with the following and would like to hear other thoughts on the topic:

    1. Set up an SSH tunnel from each client office to a central server. The remote devices would connect to that central server on a port and the traffic would be tunnelled back to Tomcat in the office. It seems kind of redundant to have SSH and then SSL, but there is really no other way to accomplish it, since end-to-end I need SSL (from device to office). I'm not sure of the performance implications, but I know it would work. We would need to monitor the tunnel and bring it back up if it goes down, handle SSH key exchanges, etc. (A sketch of this option follows below.)

    2. Set up UPnP to try and configure the hole for me. It would likely work most of the time, but UPnP isn't guaranteed to be turned on. May be a good next step.

    3. Come up with some type of NAT traversal scheme. I'm just not familiar with these and uncertain of exactly how they work. We do have access to a centralized server, which is required for the authentication, if that makes it any easier.

    What else should I be looking at to get this accomplished?
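    A hedged sketch of option 1, with hostnames and ports invented for illustration and assuming a dedicated account on the central server: run a persistent reverse tunnel from the office box, then point the devices at the central server's forwarded port.

        # On the office server, next to Tomcat (run under supervision, e.g. autossh, to survive drops):
        ssh -N -R 8443:localhost:8443 tunnel@central.example.com
        # Note: for the forwarded port to be reachable from outside, the central server's
        # sshd_config needs GatewayPorts yes (by default -R binds only to loopback).

        # Devices then talk SSL to https://central.example.com:8443/,
        # which the tunnel delivers to Tomcat inside the office.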

    Read the article

  • Git repos over multiple machines - backups and keeping in sync

    - by a-or-b
    I'm new to git, so please feel free to RTFM me...

    I have multiple development sites (none of which can communicate with each other over a network) and am working on a few projects (with a few people) at any one time. What I would ideally have is, at each site, a centralized repository that can be pulled from, while development happens in our own (personal) repos. Then I would like to be able to sync the centralized repos across sites (via USB key, for example).

    I want a centralized repo at each location because (1) I'm new to git and do break my (personal) local repo by playing around, and (2) some projects get put on hold, so I want to be able to free up disk space by deleting them. This is the "backup" part of my question.

    I was also hoping to be able to use 'git clone --bare' for my centralized repos (and the USB key repos too?), as we don't need the full checkout, just the git benefits. However, I can't seem to get a bare repo to work as a repo I can push to. I've used 'git remote' to set up a remote origin (similar to http://toolmantim.com/thoughts/setting_up_a_new_remote_git_repository) but I can't get 'git push' to work - it seems I need a checked-out repo.

    Does anyone else use this sort of repo/development structure, or is there something fundamental about git usage that I'm missing?

    A solution that I thought about that might not work: if I had a 'git clone --bare' repo at each site, and then a git repo on my removable media with remotes set up for each site, then I could ('pull') sync my USB key with each repo. But then can I update the site repo from my USB key? Could I push from the USB key?
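    For reference, a bare repository is exactly the kind of repo you push to; the usual trap is adding a non-bare clone as the remote, which refuses pushes to its checked-out branch. A hedged sketch with made-up paths:

        # Central repo at each site (no working tree, safe to delete when a project is shelved)
        git init --bare /srv/git/project.git

        # In a personal working repo
        git remote add site /srv/git/project.git
        git push site master
        git pull site master

        # Carry the site repo to another location on a USB key
        git clone --bare /srv/git/project.git /media/usb/project.git

        # At the other site, update its central repo from the key (fast-forward only)
        cd /srv/git/project.git && git fetch /media/usb/project.git master:master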

    Read the article

  • PHP Combo Box AJAX Refresh

    - by bhs
    I have a PHP page that currently has 4 years of team positions in columns. The client wants to select the players in positions and have first, second and third choices. Currently the page shows 4 columns with sets of combos for each position. Each combo has a full set of players in it and the user chooses the player he wants from the combos. On submit, the player positions are stored in the database.

    However, the client now wants to change the page so that when he selects a player in a year and position, the player is removed from the combos and can no longer be selected for that year. I've used a bit of AJAX but was wondering if anyone had any thoughts/suggestions. The page is currently quite slow, so they want it sped up as well. The page layout is currently like this:

        POSITION   YEAR1     YEAR2     YEAR3     YEAR4
        1          COMBOC1   COMBOC1   COMBOC1   COMBOC1
                   COMBOC2   COMBOC2   COMBOC2   COMBOC2
                   COMBOC3   COMBOC3   COMBOC3   COMBOC3
        2          (same as above)

    COMBOC1, 2 and 3 all currently have the same players - when a player is selected it needs to be removed from all combos below it.

    I was thinking of starting by changing the page design to have text boxes for the players and a single player select under each year, like this:

        POSITION   YEAR1                          YEAR2  YEAR3  YEAR4
        1          <PLAYER><POSITION><CHOICE> ...
                   [TEXT BOX CHOICE1]
                   [TEXT BOX CHOICE2]
                   [TEXT BOX CHOICE3]
        2          ...

    Then I only have one combo box for each year to worry about. I do, however, have the same problem of refreshing the combo box and removing the player that has been selected, and I'd prefer to do it without a page submit.

    Sorry for the long posting - cheers

    Read the article

  • memcmp,strcmp,strncmp in C

    - by el10780
    I wrote this small piece of code in C to test the memcmp(), strncmp() and strcmp() functions:

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        int main(int argc, char **argv)
        {
            char *word1 = "apple", *word2 = "atoms";

            if (strncmp(word1, word2, 5) == 0)
                printf("strncmp result.\n");
            if (memcmp(word1, word2, 5) == 0)
                printf("memcmp result.\n");
            if (strcmp(word1, word2) == 0)
                printf("strcmp result.\n");
        }

    Can somebody explain the differences to me, because I am confused about these three functions?

    My main problem is that I have a file whose lines I tokenize, and the tokenizing has to stop when I reach the word "atoms" in the file. I first tried strcmp(), but unfortunately when it reached the point where the word "atoms" was placed in the file, it didn't stop and continued. When I used either memcmp() or strncmp() it stopped, and I was happy.

    But then I thought: what if there is a string whose first 5 letters are a, t, o, m, s, followed by other letters? Unfortunately, my thoughts were right, as I tested it with the above code by initializing word1 to "atomsaaaaa" and word2 to "atoms", and memcmp() and strncmp() in the if statements returned 0. On the other hand, strcmp() did not. It seems that I must use strcmp().

    I have done Google searches but I got more confused, as different sites and forums seem to define these three differently. If someone could give me correct explanations/definitions so I can use them correctly in my source code, I would be really grateful.
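    A small illustration of the difference that caused the surprise: strncmp and memcmp only look at the first n characters/bytes (memcmp does not even care about the terminating '\0'), while strcmp compares until the strings differ or both end, so only strcmp distinguishes "atoms" from "atomsaaaaa".

        #include <stdio.h>
        #include <string.h>

        int main(void)
        {
            const char *full = "atomsaaaaa", *stop = "atoms";

            printf("strncmp: %d\n", strncmp(full, stop, 5)); /* 0: first 5 chars match        */
            printf("memcmp:  %d\n", memcmp(full, stop, 5));  /* 0: first 5 bytes match        */
            printf("strcmp:  %d\n", strcmp(full, stop));     /* > 0: "atomsaaaaa" keeps going */
            return 0;
        }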

    Read the article

  • Constructor initializer list: code from the C++ Primer, chapter 16

    - by Alexandros Gezerlis
    Toward the end of Chapter 16 of the "C++ Primer" I encountered the following code (I've removed a bunch of lines):

        class Sales_item {
        public:
            // default constructor: unbound handle
            Sales_item(): h() { }
        private:
            Handle<Item_base> h;   // use-counted handle
        };

    My problem is with the Sales_item(): h() { } line. For the sake of completeness, let me also quote the parts of the Handle class template that I think are relevant to my question (I think I don't need to show the Item_base class):

        template <class T>
        class Handle {
        public:
            // unbound handle
            Handle(T *p = 0): ptr(p), use(new size_t(1)) { }
        private:
            T* ptr;        // shared object
            size_t *use;   // count of how many Handles point to *ptr
        };

    I would have expected something like either: a) Sales_item(): h(0) { }, which is a convention the authors have used repeatedly in earlier chapters, or b) Handle<Item_base>(), if the intention was to invoke the default constructor of the Handle class. Instead, what the book has is Sales_item(): h() { }. My gut reaction is that this is a typo, since h() looks suspiciously similar to a function declaration. On the other hand, I just tried compiling under g++ and running the example code that uses this class, and it seems to be working correctly. Any thoughts?
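    For what it's worth (this is a reading of the code, not the book's wording): in a constructor initializer list, h() is not a function declaration; it value-initializes the member h, which for a class type means running its default constructor. Because Handle's only constructor defaults its argument to 0, the three alternative spellings below all end up constructing h with ptr == 0 (only one of them would appear in the class, of course):

        // Three equivalent ways to write Sales_item's default constructor for this Handle:
        Sales_item(): h() { }     // value-initialize the member (the book's version)
        Sales_item(): h(0) { }    // pass the default argument explicitly
        Sales_item() { }          // omit it; h is default-constructed anyway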

    Read the article
