Search Results

Search found 1675 results on 67 pages for 'basis vasis'.


  • Core principles, rules, and habits for CS students

    - by Asad Butt
    No doubt there is a lot to read on blogs, in books, and on Stack Overflow, but can we identify some guidelines for CS students to use while studying? For me these are: Finish your course books early and read 4-5 times more material relative to your course work. Programming is one of the fastest-evolving professions, so follow blogs daily for the latest updates, news, and technologies. Instead of relying on assignments and exams, do at least one extra, non-graded, small to medium-sized project for every programming course. Fight hard for internships or work placements even if they are unpaid, since 3 months of work can be worth a year at college. Practice everything, every possible and impossible way. Try doing every bit of your assignments and projects yourself; i.e. fight for every inch. Rely on documentation as the first source for help and samples, and on Google and online forums as the last. Participate often in online communities and forums to learn the best possible approach to every problem (after doing your bit). Make testing one of your habits, as it is getting more important every day in programming. Make writing one of your habits: write something productive once or twice a week and publish it.

    Read the article

  • C#: Take Out Image Portion of JPEG to Backup Metadata?

    - by Carlo Mendoza
    This will be a little backwards from the typical approach. I've used ExifTool for metadata manipulation before, but I really want to keep the best metadata backup I can before I make anything permanent. What I want to do is remove the compressed image portion of a JPEG file and leave everything else intact; that means backing up EXIF, MakerNotes, IPTC, XMP, etc., whether they sit at the beginning or the end of the file. What I've tried so far is to strip all metadata from a copy of the original JPEG and use it as a basis for which bytes to take out of the original. After looking at the raw data, the stripped copy doesn't seem to appear as a contiguous run of bytes within the original, and there may be some header information still remaining in the stripped version. I don't really know. Not a good way to do it, I suppose. Are there any markers that will tell me definitively where the compressed JPEG image data starts and ends? I understand that JPEG files have 0xFFD8 and 0xFFD9 to mark the start and end of the image, but I have come to find out that metadata actually sits between those markers. I'm using C#. Thank you.
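
    For reference, the marker pair you are probably after is SOS/EOI rather than SOI/EOI: the entropy-coded image data begins immediately after the Start Of Scan (0xFFDA) segment header and runs up to the End Of Image (0xFFD9) marker, while EXIF, IPTC and XMP live in APPn/COM segments before the scan (an EXIF thumbnail even embeds its own 0xFFD8/0xFFD9 inside its APP1 segment, which is why searching for those two byte pairs alone is misleading). The sketch below walks the segment lengths to find the scan boundaries; it is written in Python purely for illustration (the question is C#, where a BinaryReader would do the same job) and assumes a straightforward baseline file with nothing appended after the final EOI.

        import struct

        def find_scan_data(path):
            """Return (start, end) byte offsets of the entropy-coded scan data."""
            data = open(path, "rb").read()
            if data[:2] != b"\xFF\xD8":
                raise ValueError("not a JPEG (missing SOI)")
            pos = 2
            while pos < len(data) - 1:
                if data[pos] != 0xFF:                  # no handling of padding bytes in this sketch
                    raise ValueError("lost sync at offset %d" % pos)
                marker = data[pos + 1]
                (seg_len,) = struct.unpack(">H", data[pos + 2:pos + 4])
                if marker == 0xDA:                     # SOS: scan data follows this segment header
                    start = pos + 2 + seg_len
                    end = data.rfind(b"\xFF\xD9")      # outermost EOI marks the end of the scan
                    return start, end
                pos += 2 + seg_len                     # APPn, COM, DQT, DHT, SOFn, ... all carry a length
            raise ValueError("no SOS segment found")

    With those offsets you can copy out, or replace, just the scan data while keeping every metadata segment byte-for-byte intact.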

    Read the article

  • How to make a request from an Android app that can enter a Spring Security secured web service method

    - by johnrock
    I have a Spring Security (form based authentication) web app running CXF JAX-RS webservices and I am trying to connect to this webservice from an Android app that can be authenticated on a per user basis. Currently, when I add an @Secured annotation to my webservice method all requests to this method are denied. I have tried to pass in credentials of a valid user/password (that currently exists in the Spring Security based web app and can log in to the web app successfully) from the android call but the request still fails to enter this method when the @Secured annotation is present. The SecurityContext parameter returns null when calling getUserPrincipal(). How can I make a request from an android app that can enter a Spring Security secured webservice method? Here is the code I am working with at the moment: Android call: httpclient.getCredentialsProvider().setCredentials( //new AuthScope("192.168.1.101", 80), new AuthScope(null, -1), new UsernamePasswordCredentials("joeuser", "mypassword")); String userAgent = "Android/" + getVersion(); HttpGet httpget = new HttpGet(MY_URI); httpget.setHeader("User-Agent", userAgent); httpget.setHeader("Content-Type", "application/xml"); HttpResponse response; try { response = httpclient.execute(httpget); HttpEntity entity = response.getEntity(); ... parse xml Webservice Method: @GET @Path("/payload") @Produces("application/XML") @Secured({"ROLE_USER","ROLE_ADMIN","ROLE_GUEST"}) public Response makePayload(@Context Request request, @Context SecurityContext securityContext){ Payload payload = new Payload(); payload.setUsersOnline(new Long(200)); if (payload == null) { return Response.noContent().build(); } else{ return Response.ok().entity(payload).build(); } }

    Read the article

  • What algorithm should I follow to retrieve data in the prescribed format?

    - by Prateek
    I have to retrieve data from a database whose tables consist of fields named "ttc", "rm", "atc" and "lta". These values are stored on a daily basis at 15-minute intervals, like: From_time to_time ttc rm atc lta; 00:00 00:15 45 10 35 25; 00:15 00:30 35 10 25 25; and so on. Values are stored for every day of every month, and I want them previewed in a prescribed format, so what algorithm should I follow? I am confused about how to do the comparisons for the format mentioned below. The format is at this link: https://drive.google.com/a/itbhu.ac.in/file/d/0B_J0Ljq64i4Za2J1V0lvbDZ4eGc/edit?usp=sharing To be specific once again: I have to prepare a report from the retrieved data, which is stored in the database as explained above, and the report will cover an entire month. There may be cases where, for two particular days, the value of "ttc" is the same for some time, and I want those rows listed together (as shown in the format). The confusing part is that any of the values "ttc", "rm", "atc", "lta" can be the same for any particular interval. So what algorithm should I follow for such comparisons? If anything is still unclear, please ask. Thanks
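
    One common way to produce that kind of report is a classic run-length grouping: order the rows by timestamp, then merge consecutive rows whose (ttc, rm, atc, lta) values are identical into a single from/to range. Here is a minimal sketch in Python (illustrative only; the sample values below are assumed, and the same logic can be expressed in SQL or in application code):

        from itertools import groupby

        rows = [
            # (from_time, to_time, ttc, rm, atc, lta) -- assumed sample data
            ("00:00", "00:15", 45, 10, 35, 25),
            ("00:15", "00:30", 45, 10, 35, 25),
            ("00:30", "00:45", 35, 10, 25, 25),
        ]

        def values(row):
            return row[2:]                                 # compare on the (ttc, rm, atc, lta) tuple

        merged = []
        for vals, run in groupby(rows, key=values):        # groupby merges only consecutive equal rows
            run = list(run)
            merged.append((run[0][0], run[-1][1]) + vals)  # first from_time, last to_time

        for rec in merged:
            print(rec)   # ('00:00', '00:30', 45, 10, 35, 25) then ('00:30', '00:45', 35, 10, 25, 25)

    If only some columns need to collapse while others keep changing, the same idea applies per column: track the start of the current run and emit a row whenever the tracked value changes.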

    Read the article

  • Threading.Timer asynchronously invokes many methods

    - by Dimitar
    Hi guys, please help! I call a Threading.Timer from Global.asax which invokes many methods, each of which gets data from different services and writes it to files. My question is: how do I make the methods get invoked on a regular basis, say every 5 minutes? What I do is declare a timer in Global.asax: protected void Application_Start() { TimerCallback timerDelegate = new TimerCallback(myMainMethod); Timer mytimer = new Timer(timerDelegate, null, 0, 300000); Application.Add("timer", mytimer); } The declaration of myMainMethod looks like this: public static void myMainMethod(object obj) { MyDelegateType d1 = new MyDelegateType(getandwriteServiceData1); d1.BeginInvoke(null, null); MyDelegateType d2 = new MyDelegateType(getandwriteServiceData2); d2.BeginInvoke(null, null); } This approach works fine, but it invokes myMainMethod every 5 minutes. What I need is for the method to be invoked 5 minutes after all the data has been retrieved and written to files on the server. How do I do that?
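
    The usual pattern for "run again N minutes after the previous run finishes" is a one-shot timer that re-arms itself once the work has completed, rather than a fixed-period timer. In .NET that would mean creating the System.Threading.Timer with a period of Timeout.Infinite, waiting for the asynchronously invoked delegates to finish (e.g. via EndInvoke), and then calling Change() at the end of myMainMethod. The sketch below shows the same scheduling shape in Python, purely for illustration, with get_and_write_service_data standing in for the real work:

        import threading

        INTERVAL_SECONDS = 5 * 60   # 5 minutes, measured from completion of the work

        def get_and_write_service_data():
            print("fetching service data and writing files...")   # placeholder for the real work

        def run_job():
            try:
                get_and_write_service_data()
            finally:
                # re-arm only after the work has finished, so the 5-minute gap
                # starts when the files have been written, not when the job started
                threading.Timer(INTERVAL_SECONDS, run_job).start()

        run_job()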

    Read the article

  • The move from IE6/XP to IE8/Win7 and its effect on ASP.NET applications

    - by user311020
    Hello, the company I work for is preparing for application testing in IE8. Previously we have been using IE6. Many of our web applications are written in .NET 1.0 and 1.1, with more recent apps written in 2.x and 3.x. I know IE8 has an IE7 compatibility mode and is said to have a quirks mode, but most of our apps were written for IE6, which is not specifically mentioned: the compatibility mode targets IE7, which in turn had some compatibility with IE6, and I do not know whether that necessarily carries over to IE8. In IE6, quirks mode was meant to run IE5.5-era sites without a problem, but nowhere in Microsoft's release notes that I can find is IE8's quirks mode described as IE6- or even IE5.5-compliant; they only explain what triggers it (specific DOCTYPEs, or no DOCTYPE). If anyone could shed some light on how sites and apps designed for IE6 should run in IE8, it would be greatly appreciated. If anyone else has made a similar move, how smooth was the transition? Thanks.
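
    One concrete, documented knob worth knowing about while testing: IE8 honors an X-UA-Compatible directive, sent either as an HTTP response header or as a meta tag early in the page head, which forces IE7-style rendering without touching the rest of the markup. A typical per-page form (shown only as a reference, not a claim that it fixes IE6-era layouts) is:

        <meta http-equiv="X-UA-Compatible" content="IE=EmulateIE7" />

    With IE=EmulateIE7, pages with a standards DOCTYPE render as they did in IE7 and pages without one still fall into quirks mode, while IE=7 forces IE7 standards mode regardless of DOCTYPE; the same value can be set site-wide as a custom response header in IIS.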

    Read the article

  • Moving from SVN to HG: branching and backup

    - by rorycl
    My company runs svn right now and we are very familiar with it. However, because we do a lot of concurrent development, merging can become very complicated.. We've been playing with hg and we really like the ability to make fast and effective clones on a per-feature basis. We've got two main issues we'd like to resolve before we move to hg: Branches for erstwhile svn users I'm familiar with the "4 ways to branch in Mercurial" as set out in Steve Losh's article. We think we should "materialise" the branches because I think the dev team will find this the most straightforward way of migrating from svn. Consequently I guess we should follow the "branching with clones" model which means that separate clones for branches are made on the server. While this means that every clone/branch needs to be made on the server and published separately, this isn't too much of an issue for us as we are used to checking out svn branches which come down as separate copies. I'm worried, however, that merging changes and following history may become difficult between branches in this model. Backup If programmers in our team make local clones of a branch, how do they backup the local clone? We're used to seeing svn commit messages like this on a feature branch "Interim commit: db function not yet working". I can't see a way of doing this easily in hg. Advice gratefully received. Rory

    Read the article

  • SQL Server 2005 Import from Excel

    - by user327045
    I'd like to know what my best option would be for importing data from an Excel file on a weekly or monthly basis. At first, I thought I would use SSIS, but after much struggle with seemingly simple tasks, I'm starting to rethink my plan. Would it be better/easier to just write the SQL by hand or use the services of an SSIS package? The basic process will be as follows: A separate process downloads an .xls file to a local file share. The .xls file has a filename like 'myfilename MON YY'. I will need to read the month and year from the filename, reformat it to a SQL date and then query a DimDate table to find the corresponding date key. For each row (after the first 2 header rows), insert the data with the date key, unless the row is a total row, in which case ignore it. Here are some of the issues I've been encountering with SSIS: I can parse the date string from a flat file data source, but can't seem to do it with an Excel data source. Also, once parsed, I cannot seem to convert the string to a date in order to perform the lookup for the date key. For example, I want to do something like this: select DateKey from DimDate where ActualDate = convert(datetime, '01-' + 'JAN-10', 120) but I don't think it is possible to use the 'convert' or 'datetime' keywords in an expression builder. I have also been unable to find where I can edit the SQL to ignore the first 2 rows of data. I'm very skeptical of using SSIS because it seems like a kludgy way of doing something that can probably be accomplished more efficiently by writing the SQL yourself, but I may be forced to use SSIS. Thoughts?
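
    Whichever route the load takes, the filename-to-date step is small enough to isolate in its own piece of code. The sketch below (Python, used purely to make the transformation concrete; the filename pattern 'myfilename MON YY.xls' is taken from the question) turns the 'MON YY' fragment into the first-of-month value that the DimDate lookup would match on:

        from datetime import datetime

        filename = "myfilename JAN 10.xls"                # assumed example name
        stem = filename.rsplit(".", 1)[0]                 # drop the extension
        month_year = " ".join(stem.split()[-2:])          # -> "JAN 10"
        first_of_month = datetime.strptime(month_year, "%b %y")
        print(first_of_month.strftime("%Y-%m-%d"))        # 2010-01-01, the value to match DimDate.ActualDate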

    Read the article

  • Does CSS have a "start over" feature?

    - by Rick Wayne
    I'm using calendar_date_select (henceforth CDS) in a Rails application, and have a stupid question. When I embed the CDS component in the middle of an already-CSS-styled page, all manner of things go ugly-wrong with it (spacing, fonts, etc.). Clearly the elements inside the CDS have inherited unwanted stuff from the styles already working in the containing page. Now, I could use a combination of, say, Safari's CSS debugging and analyze what's wrong element-by-element. But that's (A) tedious, and (B) might load up my component's styles with tons of container-defeating special cases. If nothing else, I'm certain to change the containing page's styles in the future and would have to maintain the special cases. My question: Is it possible to have a DIV in a page that essentially backs out all the existing styling? Is there a simple one-liner that will do this? Failing that, can it be done on an element-by-element basis? E.g. I know what tags the CDS generates, so I could list each of them: { p: "#--NOTHING--#"; a: "#--NOTHING--#"; } where #--NOTHING--# is the Magic Turn Off All Inherited Styles incantation. http://code.google.com/p/calendardateselect/ Thanks, peeps.

    Read the article

  • Good tools for keeping the content in test/staging/live environments synchronized

    - by David Stratton
    I'm looking for recommendations on automated folder synchronization tools to keep the content in our three environments synchronized automatically. Specifically, we have several applications where a user can upload content (via a File Upload page or a similar mechanism), such as images, pdf files, word documents, etc. In the past, we had the user doing this to our live server, and as a result, our test and staging servers had to be manually synchronized. Going forward, we will have them upload content to the staging server, and we would like some software to automatically copy the files off to the test and live servers EITHER on a scheduled basis OR as the files get uploaded. I was planning on writing my own component, either setting it up as a scheduled task or using a FileSystemWatcher, but it occurred to me that this has probably already been done, and I might be better off with some sort of synchronization tool that already exists. On our web site, there are a limited number of folders that we want to keep synchronized. In these folders, it is all or nothing - we want to make sure the folders are EXACT duplicates. This should make it fairly straightforward, and I would think that any software that can synchronize folders would be OK, except that we also would like the software to log changes. (This rules out simple BATCH files.) So I'm curious, if you have a similar environment, how did you solve the challenge of keeping everything synchronized? Are you aware of a tool that is reliable, and will meet our needs? If not, do you have a recommendation for something that will come close, or better yet, an open source solution where we can get the code and modify it as needed? (preferably .NET). Added: Also, I DID google this first, but there are so many options, I am interested mostly in knowing what actually works well vs what they SAY works, which is why I'm asking here.
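
    For scale: if the requirement really is "mirror a handful of folders exactly and log what changed", the roll-your-own scheduled task is only a few dozen lines. The sketch below (Python standard library only, offered to show the size of the problem rather than as a tool recommendation; the share paths are hypothetical) mirrors one folder tree and writes a change log; a .NET equivalent built on a FileSystemWatcher would be similarly small.

        import filecmp, logging, os, shutil

        logging.basicConfig(filename="sync.log", level=logging.INFO,
                            format="%(asctime)s %(message)s")

        def mirror(src, dst):
            """Make dst an exact copy of src, logging every change (Python 3.8+ for dirs_exist_ok)."""
            os.makedirs(dst, exist_ok=True)
            cmp = filecmp.dircmp(src, dst)
            for name in cmp.left_only + cmp.diff_files:        # new or changed entries
                s, d = os.path.join(src, name), os.path.join(dst, name)
                if os.path.isdir(s):
                    shutil.copytree(s, d, dirs_exist_ok=True)
                else:
                    shutil.copy2(s, d)
                logging.info("copied %s", s)
            for name in cmp.right_only:                        # present only in the mirror: remove it
                d = os.path.join(dst, name)
                if os.path.isdir(d):
                    shutil.rmtree(d)
                else:
                    os.remove(d)
                logging.info("removed %s", d)
            for sub in cmp.common_dirs:                        # keep subfolders in sync too
                mirror(os.path.join(src, sub), os.path.join(dst, sub))

        mirror(r"\\staging\uploads", r"\\live\uploads")        # hypothetical share paths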

    Read the article

  • SQLite Databases and Grid Hosting

    - by jocull
    I'm considering moving my site from a GoDaddy shared hosting account to a Media Temple grid hosting account in anticipation of traffic. However, I first have some concerns with the grid hosting setup. My site stores a large personal set of data on a per-user basis (possibly 3-4MB per user). At this rate I was worried about blowing over a 1GB MySQL limit in no time. To deal with this I created distributed SQLite databases per user to store large data objects. It's worked wonderfully so far. SQLite is super fast and simple. I know that reading from and writing to files is different in a Grid Hosting environment. I need to know if this setup is going to cause serious problems. These databases are not (and will not be) highly trafficked. They are personal to the user and will only be touched from maybe two locations at the same time (one updating the data hourly at the most, and one or more reading on demand). I'd like to keep this setup, as getting additional space (beyond 4GB) on a MySQL database seems to be a real trouble point. Will Grid Hosting cause me serious problems? Thanks.

    Read the article

  • Strange problem with PHP and sessions

    - by Jhorra
    The basis of this page is that I set a session value when the page loads and clear it on any other page they visit. The page can then make an AJAX call to download a file; if the session value matches the value I pass through the URL, I allow them to download the file, and if not I return a 404 error. I was having some weird issues, so I removed the 404 and set it to echo out the values instead to see what I was getting. Here is the top of the code on the page: $code = $this->_request->getParam('code'); $confirm = $_SESSION['mp3_code']; echo $code."-1-".$confirm; if($code != $confirm) echo $code."-2-".$confirm;//header("HTTP/1.1 404 Not Found"); else { Here is what displays on the page from the AJAX call: 12723430-1-12723430-2- As you can see, the values exist when they are first echoed, but then, after I compare them and the comparison fails, the second echo prints blank values, as if they suddenly ceased to exist. Any ideas?

    Read the article

  • How to (unit-)test data intensive PL/SQL application

    - by doom2.wad
    Our team wants to unit-test new code written under a running project that extends an existing, huge Oracle system. The system is written solely in PL/SQL and consists of thousands of tables and hundreds of stored procedure packages, mostly getting data from tables and/or inserting/updating other data. Our extension is no exception: most functions return data from a quite complex SELECT statement over many mutually bound tables (with a little added logic before returning it) or transform one complicated data structure into another (complicated in a different way). What is the best approach to unit-testing such code? There are no unit tests for the existing code base. To make things worse, only packages, triggers and views are source-controlled; table structures (including "alter table" changes and the necessary data transformations) are deployed via a channel other than version control, and there is no way to change this within our project's scope. Maintaining a test data set seems impossible, since new code is deployed to the production environment on a weekly basis, usually without prior notice, often changing the data structure (add a column here, remove one there). I'd be glad for any suggestion or reference to help us. Some team members are getting tired just figuring out how to start, since our experience with unit testing does not cover PL/SQL data-intensive legacy systems (only those "from-the-book" greenfield Java projects).

    Read the article

  • Rewriting a for loop in pure NumPy to decrease execution time

    - by Statto
    I recently asked about trying to optimise a Python loop for a scientific application, and received an excellent, smart way of recoding it within NumPy which reduced execution time by a factor of around 100 for me! However, calculation of the B value is actually nested within a few other loops, because it is evaluated at a regular grid of positions. Is there a similarly smart NumPy rewrite to shave time off this procedure? I suspect the performance gain for this part would be less marked, and the disadvantages would presumably be that it would not be possible to report back to the user on the progress of the calculation, that the results could not be written to the output file until the end of the calculation, and possibly that doing this in one enormous step would have memory implications? Is it possible to circumvent any of these?

        import numpy as np
        import time

        def reshape_vector(v):
            b = np.empty((3,1))
            for i in range(3):
                b[i][0] = v[i]
            return b

        def unit_vectors(r):
            return r / np.sqrt((r*r).sum(0))

        def calculate_dipole(mu, r_i, mom_i):
            relative = mu - r_i
            r_unit = unit_vectors(relative)
            A = 1e-7
            num = A*(3*np.sum(mom_i*r_unit, 0)*r_unit - mom_i)
            den = np.sqrt(np.sum(relative*relative, 0))**3
            B = np.sum(num/den, 1)
            return B

        N = 20000                        # number of dipoles
        r_i = np.random.random((3,N))    # positions of dipoles
        mom_i = np.random.random((3,N))  # moments of dipoles
        a = np.random.random((3,3))      # three basis vectors for this crystal
        n = [10,10,10]                   # points at which to evaluate sum
        gamma_mu = 135.5                 # a constant

        t_start = time.clock()
        for i in range(n[0]):
            r_frac_x = np.float(i)/np.float(n[0])
            r_test_x = r_frac_x * a[0]
            for j in range(n[1]):
                r_frac_y = np.float(j)/np.float(n[1])
                r_test_y = r_frac_y * a[1]
                for k in range(n[2]):
                    r_frac_z = np.float(k)/np.float(n[2])
                    r_test = r_test_x + r_test_y + r_frac_z * a[2]
                    r_test_fast = reshape_vector(r_test)
                    B = calculate_dipole(r_test_fast, r_i, mom_i)
                    omega = gamma_mu*np.sqrt(np.dot(B,B))
                    # write r_test, B and omega to a file
            frac_done = np.float(i+1)/(n[0]+1)
            t_elapsed = (time.clock()-t_start)
            t_remain = (1-frac_done)*t_elapsed/frac_done
            print frac_done*100,'% done in',t_elapsed/60.,'minutes...approximately',t_remain/60.,'minutes remaining'
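
    On the memory and progress concerns: the three loops only exist to enumerate grid points, so one middle-ground rewrite is to generate all the fractional coordinates with broadcasting and then keep a single (optionally chunked) loop over the resulting points, which preserves incremental file writes and progress reporting while removing two levels of Python looping. A sketch, reusing the variable names from the question (the dipole sum itself is unchanged and not repeated here):

        import numpy as np

        a = np.random.random((3, 3))    # three basis vectors, as in the question
        n = [10, 10, 10]

        # fractional coordinates along each basis vector
        fx = np.arange(n[0], dtype=float) / n[0]
        fy = np.arange(n[1], dtype=float) / n[1]
        fz = np.arange(n[2], dtype=float) / n[2]

        # every combination at once via broadcasting: shape (n0, n1, n2, 3)
        grid = (fx[:, None, None, None] * a[0]
                + fy[None, :, None, None] * a[1]
                + fz[None, None, :, None] * a[2])
        points = grid.reshape(-1, 3)    # one row per evaluation point, (1000, 3) here

        for idx, r_test in enumerate(points):
            # call calculate_dipole(reshape_vector(r_test), r_i, mom_i) as before,
            # write the result to the file, and report progress every so often
            if idx % 100 == 0:
                print("%.1f%% done" % (100.0 * idx / len(points)))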

    Read the article

  • Mercurial repository usage with binary files for building setup files

    - by Ryan
    I have an existing Mercurial repository for a C++ application in a small corporate environment. I asked a co-worker to add the setup script to the repository and he added all of the dependency binaries, PDFs, and executable to the repository under an Install directory. I dislike having the binaries and dependencies in the same repository, but I'd like recommendations on best practices. Here are the options I am considering: Create a separate repository for the Installer and related files Create a subrepository for the Installer and related files Use a (yet to be identified) build dependency manager I am concerned with using a subrepository with Mercurial based on what I've read so far and the (apparently) incomplete implementation. I would like to get a project dependency system, e.g. Ivy, but I don't know all of the options and haven't had time yet to try out any options. I thought I'd use TortoiseHg as a basis, and it does not have the TortoiseHg binaries in the repository although it does have some binaries such as kdiff3.exe. Instead it uses setup.py to clone multiple repositories and build the apps. This seems reasonable for OSS, but not so much for corporate environments. Recommendations?

    Read the article

  • iPhone Core Data fetch request sorting

    - by satyam
    I'm using Core Data and fetching results successfully. I have a few questions regarding Core Data: 1. When I add a record, will it be added at the end or at the start of the entity? 2. I'm using the following code to fetch the data. The array is populated with all the records, but they are not in the same order in which I entered them into the entity. Why? On what basis is the default sorting applied? NSFetchRequest* allLatest = [[NSFetchRequest alloc] init]; [allLatest setEntity:[NSEntityDescription entityForName:@"Latest" inManagedObjectContext:[self managedObjectContext]]]; NSError* error = nil; NSArray* records = [managedObjectContext executeFetchRequest:allLatest error:&error]; [allLatest release]; 3. I enter the records as 1, 2, 3, 4... and after some time I want to delete the records I entered first (I mean the oldest data), something like "delete the oldest two records". How do I do that?

    Read the article

  • SQL - Updating records based on most recent date

    - by Remnant
    I am having difficulty updating records within a database based on the most recent date and am looking for some guidance. By the way, I am new to SQL. As background, I have a windows forms application with SQL Express and am using ADO.NET to interact with the database. The application is designed to enable the user to track employee attendance on various courses that must be attended on a periodic basis (e.g. every 6 months, every year etc.). For example, they can pull back data to see the last time employees attended a given course and also update attendance dates if an employee has recently completed a course. I have three data tables: EmployeeDetailsTable - simple list of employees names, email address etc., each with unique ID CourseDetailsTable - simple list of courses, each with unique ID (e.g. 1, 2, 3 etc.) AttendanceRecordsTable - has 3 columns { EmployeeID, CourseID, AttendanceDate, Comments } For any given course, an employee will have an attendance history i.e. if the course needs to be attended each year then they will have one record for as many years as they have been at the company. What I want to be able to do is to update the 'Comments' field for a given employee and given course based on the most recent attendance date. What is the 'correct' SQL syntax for this? I have tried many things (like below) but cannot get it to work: UPDATE AttendanceRecordsTable SET Comments = @Comments WHERE AttendanceRecordsTable.EmployeeID = (SELECT EmployeeDetailsTable.EmployeeID FROM EmployeeDetailsTable WHERE (EmployeeDetailsTable.LastName =@ParameterLastName AND EmployeeDetailsTable.FirstName =@ParameterFirstName) AND AttendanceRecordsTable.CourseID = (SELECT CourseDetailsTable.CourseID FROM CourseDetailsTable WHERE CourseDetailsTable.CourseName =@CourseName)) GROUP BY MAX(AttendanceRecordsTable.LastDate) After much googling, I discovered that MAX is an aggregate function and so I need to use GROUP BY. I have also tried using the HAVING keyword but without success. Can anybody point me in the right direction? What is the 'conventional' syntax to update a database record based on the most recent date?

    Read the article

  • What use does the JavaScript forEach method have (that map doesn't)?

    - by JohnMerlino
    Hey all, the only difference I see between map and forEach is that map returns an array and forEach does not. However, I don't even understand the last line of the forEach method, "func.call(scope, this[i], i, this);". For example, don't "this" and "scope" refer to the same object, and don't this[i] and i refer to the same current value in the loop? I noticed on another post someone said "Use forEach when you want to do something on the basis of each element of the list. You might be adding things to the page, for example. Essentially, it's great for when you want 'side effects'." I don't know what is meant by side effects. Array.prototype.map = function(fnc) { var a = new Array(this.length); for (var i = 0; i < this.length; i++) { a[i] = fnc(this[i]); } return a; } Array.prototype.forEach = function(func, scope) { scope = scope || this; for (var i = 0, l = this.length; i < l; i++) func.call(scope, this[i], i, this); } Finally, are there any real uses for these methods in JavaScript (since we aren't updating a database) other than to manipulate numbers like this: alert([1,2,3,4].map(function(x){ return x + 1})); //this is the only example I ever see of map in JavaScript. Thanks for any reply.
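
    On the "side effects" point specifically: a side effect is anything the callback does besides producing a value, such as writing to the DOM, logging, or mutating outside state. The distinction isn't tied to JavaScript; the tiny Python analogue below (illustration only, unrelated to the prototype implementations quoted above) shows the same split between "transform and return" and "just do something per element".

        values = [1, 2, 3, 4]

        doubled = [v * 2 for v in values]   # map-style: builds and returns a new list
        print(doubled)                      # [2, 4, 6, 8]

        for v in values:                    # forEach-style: returns nothing, exists only
            print("rendering", v)           # for its side effect (here, printing)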

    Read the article

  • How to customize the process employed by WCF when serializing contract method arguments?

    - by mark
    Dear ladies and sirs, I would like to formulate a contrived scenario which nevertheless has a firm basis in reality. Imagine a collection type COuter, which is a wrapper around an instance of another collection type CInner. Both implement IList (never mind the T). Furthermore, a COuter instance is buried inside some object graph, the root of which (let us refer to it as R) is returned from a WCF service method. My question is: how can I customize the WCF serialization process so that when R is returned, the request to serialize the COuter instance is routed through my code, which will extract CInner and pass it to the serializer instead? Thus the receiving end still gets R, only no COuter instance is found in the object graph. I hoped that http://stackoverflow.com/questions/2220516/how-does-wcf-serialize-the-method-call would contain the answer; unfortunately the article mentioned there (http://msdn.microsoft.com/en-us/magazine/cc163569.aspx) only barely mentions that advanced serialization scenarios are possible using the IDataContractSurrogate interface, and no details are given. I, on the other hand, would really like to see a working example. Thank you very much in advance.

    Read the article

  • Are bad data issues that common?

    - by Water Cooler v2
    I've worked for clients that had a large number of distinct, small to mid-sized projects, each interacting with each other via properly defined interfaces to share data, but not reading and writing to the same database. Each had their own separate database, their own cache, their own file servers/system that they had dedicated access to, and so they never caused any problems. One of these clients is a mobile content vendor, so they're lucky in a way that they do not have to face the same problems that everyday business applications do. They can create all those separate compartments where their components happily live in isolation of the others. However, for many business applications, this is not possible. I've worked with a few clients, one of whose applications I am doing the production support for, where there are "bad data issues" on an hourly basis. Yeah, it's that crazy. Some data records from one of the instances (lower than production, of course) would have been run a couple of weeks ago, and caused some other user's data to get corrupted. And then, a data script will have to be written to fix this issue. And I've seen this happening so much with this client that I have to ask. I've seen this happening at a moderate rate with other clients, but this one just seems to be out of order. If you're working with business applications that share a large amount of data by reading and writing to/from the same database, are "bad data issues" that common in your environment?

    Read the article

  • Group panel of WPF ListBox

    - by rulestein
    I have a listbox that is grouping the items with a GroupStyle. I would like add a control at the bottom of the stackpanel that holds all of the groups. This additional control needs to be part of the scrolling content so that the user would scroll to the bottom of the list to see the control. If I were using a listbox without the groups, this task would be easy by modifying the ListBox template. However, with the items grouped, the ListBox template seems to only apply on a per group basis. I can modify the GroupStyle.Panel, but that doesn't allow me to add items to that panel. <ListBox> <ListBox.ItemsPanel> <ItemsPanelTemplate> <WrapPanel/> </ItemsPanelTemplate> </ListBox.ItemsPanel> <ListBox.GroupStyle> <GroupStyle> <GroupStyle.Panel> <ItemsPanelTemplate> <VirtualizingStackPanel/> **<----- I would like to add a control to this stackpanel** </ItemsPanelTemplate> </GroupStyle.Panel> <GroupStyle.ContainerStyle> <Style TargetType="{x:Type GroupItem}"> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="{x:Type GroupItem}"> <Grid> <ItemsPresenter /> </Grid> </ControlTemplate> </Setter.Value> </Setter> </Style> </GroupStyle.ContainerStyle> </GroupStyle> </ListBox.GroupStyle> This should give an idea of what I need to do:

    Read the article

  • Real Time Progress Indicator

    - by jignesh
    I am trying to build a real-time progress indicator. In the code snippet below, I am trying to set the width of a DIV based on the number of items retrieved in the web service callback function. for (i = 0; i < data.length; i++) { table += '<tr>'; table += '<td>' + data[i].ItemId + '</td>'; table += '<td></td>'; table += '<td>' + data[i].Name + '</td>'; table += '<td>' + data[i].Unit + '</td>'; table += '<td><input type=text value=' + data[i].Quantity + '></td>'; table += '<td>' + data[i].Brands + '</td>'; table += '<td><img border=0 src=../images/Delete_icon.gif></td>'; table += '</tr>'; $('#divProgressIndicator').width((100 * (i / data.length)) + '%'); } As shown in the code above, after the for loop is executed I only see divProgressIndicator filled with the background color; I am not able to see the progress advancing. Can I achieve this in a for loop, or do I need to use something else? Regards

    Read the article

  • Should uni provide "correct answer" after programming assignment is due?

    - by Michael Mao
    Hi all: This is my very first subjective question, and I think it is programming-related - the assignment is to be written in a programming language. I am not after "getting full marks in a subject". I am actually not after a "correct answer" but a "better solution", so that I can compare and improve. I reckon it is good to practice programming first and check the solution later to pick up the things I've done wrong or badly; without a "benchmark" to compare against, this is much harder. Unfortunately, as far as I know, not all programming subjects taught at uni kindly provide the students with a "correct answer" at the end, after the assignment is due. A bad metaphor for this: it is like someone asking you a question to which they don't have a clear answer themselves, hoping to use your answer as the basis for theirs. Personally, I feel that having an assignment solution provided by the academic staff is essential to students. I would appreciate this, and I feel I might not be the only one. I am a very proactive student at uni: I learn more, I practice more, and an assignment for me is more a challenge to achieve "the best solution I can come up with" than something "I have to pass"... The cause of this question is that for the past few days I have crafted 500+ lines of Perl code for a tiny assignment. I feel pain when I look at my solution (not finished yet), and I feel like an idiot writing crap code. I know there must be a much better solution, and I reckon it is better for the lecturer in this subject to give me one than for me to ask for an answer here, even though I would shamelessly add a link to my solution alongside the assignment requirements. I know that on SO there are a lot of tutors/lecturers for programming subjects/courses. I'd like to hear your words on this question.

    Read the article

  • If I use Unicode on a ISO-8859-1 site, how will that be interpreted by a browser?

    - by grg-n-sox
    So I got a site that uses ISO-8859-1 encoding and I can't change that. I want to be sure that the content I enter into the web app on the site gets parsed correctly. The parser works on a character by character basis. I also cannot change the parser, I am just writing files for it to handle. The content in my file I am telling the app to display after parsing contains Unicode characters (or at least I assume so, even if they were produced by Windows Alt Codes mapped to CP437). Using entities is not an option due to the character by character operation of the parser. The only characters that the parser escapes upon output are markup sensitive ones like ampersand, less than, and greater than symbols. I would just go ahead and put this through to see what it looks like, but output can only be seen on a publishing, which has to spend a couple days getting approved and such, and that would be asking too much for just a test case. So, long story short, if I told a site to output ?ÇÑ¥?? on a site with a meta tag stating it is supposed to use ISO-8859-1, will a browser auto-detect the Unicode and display it or will it literally translate it as ISO-8859-1 and get a different set of characters?

    Read the article

  • What different terms mean the same thing (or don't, but people think they do)?

    - by Matthew Jones
    One of the pitfalls I run into on a daily basis is customers saying one thing while meaning another. Usually, this is just due to a miscommunication somewhere, but occasionally they are, in fact, saying the same thing I am just using a different term. For example, one of my customers the other day mentioned a feature he called, "find as you type." Being a little confused, I asked him what he meant, and he described the feature in Google where, once you start typing a search query, Google suggests other, popular queries that match the letters you have typed. Click! He meant AutoComplete! He was not wrong, it is just that I had never heard that term before. In the spirit of reducing confusion, what terms can you think of that are different but mean, essentially, the same thing? Also, what terms do people think mean the same thing, but don't. Please differentiate between the two. Please only one set of terms per answer, so we can vote on the best ones.

    Read the article
