Search Results

Search found 18677 results on 748 pages for 'current'.


  • I need an IDE for TYPO3 core development in PHP

    - by Flugan
    PHP in itself is difficult for IDEs because of the dynamic nature of the language. My current development environment is mostly NetBeans against a local SVN copy of the codebase, set up in a local development webserver. The code is full-text indexed by Vista's search engine for almost instant searches. I do a lot of development directly against the main development server using a combination of tools: PuTTY to interact with the server and deploy by updating an SVN checkout on the development server, TortoiseSVN locally for a fairly rich SVN experience (NetBeans obviously has SVN integration; most of the changes on the remote server are committed through the PuTTY session), and WinSCP to interact with the development server through a Norton Commander-like interface with good PuTTY integration. Finally, my text editor for remote editing is Notepad++, out of habit and because of some nice features and a good price. What I'm really missing is good PHP editing. Because of the way TYPO3 works, almost all objects are instantiated through a makeInstance abstraction that returns either the base class or the customized class if the framework has been extended. I'm not looking for a magic editing package and would like to find an editor which can use annotations to specify the type of commonly used variables.
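
    Most PHP IDEs of that generation (NetBeans, PhpStorm, Eclipse PDT) honor @var docblock annotations, so a hedged sketch of the kind of hint being asked for might look like this (the extension class name is made up for illustration):

        <?php
        // Tell the IDE what makeInstance() will actually return here.
        // tx_myext_service is a hypothetical extension class.
        /** @var tx_myext_service $service */
        $service = t3lib_div::makeInstance('tx_myext_service');
        $service->doSomething(); // code completion now resolves against tx_myext_service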

    Read the article

  • C# wrapper for objects

    - by Haggai
    I'm looking for a way to create a generic wrapper for any object. The wrapper object will behave just like the class it wraps, but will be able to have more properties, variables, methods, etc., e.g. for object counting, caching and so on. Say the wrapper class is called Wrapper, and the class to be wrapped is called Square with the constructor Square(double edge_len) and the properties/methods EdgeLength and Area; I would like to use it as follows: Wrapper<Square> mySquare = new Wrapper<Square>(2.5); /* or */ new Square(2.5); Console.Write("Edge {0} -> Area {1}", mySquare.EdgeLength, mySquare.Area); Obviously I can create such a wrapper class for each class I want to wrap, but I'm looking for a general solution, i.e. Wrapper<T> which can handle both primitive and compound types (although in my current situation I would be happy with just wrapping my own classes). Suggestions? Thanks.
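
    C# has no transparent forwarding of arbitrary members, so a minimal sketch (names are illustrative, not a finished design) usually exposes the wrapped instance explicitly and adds the extra bookkeeping around it:

        using System;

        public class Wrapper<T>
        {
            private static int instanceCount;    // example of extra state the wrapper adds

            public T Value { get; private set; } // the wrapped object

            public Wrapper(T value)
            {
                Value = value;
                instanceCount++;
            }

            public static int InstanceCount
            {
                get { return instanceCount; }
            }
        }

        // Usage: members of Square are reached through .Value rather than directly.
        // var mySquare = new Wrapper<Square>(new Square(2.5));
        // Console.Write("Edge {0} -> Area {1}", mySquare.Value.EdgeLength, mySquare.Value.Area);

    Getting mySquare.EdgeLength to resolve directly, as in the question, would need inheritance, a shared interface with a proxy, or code generation instead.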

    Read the article

  • C#; On casting to the SAME class that came from another assembly

    - by G. Stoynev
    For complete separation/decoupling, I've implemented a DAL in an assembly that is simply being copied over via a post-build event to the website's BIN folder. The website then, on Application Start, loads that assembly via System.Reflection.Assembly.LoadFile. Again using reflection, I construct a couple of instances from classes in that assembly. I then store a reference to these instances in the session (HttpContext.Current.Items). Later, when I try to get the objects stored in the session, I am not able to cast them to their own types (was trying interfaces initially, but for debugging tried to cast to THEIR OWN TYPES), getting this error: [A]DAL_QSYSCamper.NHibernateSessionBuilder cannot be cast to [B] DAL_QSYSCamper.NHibernateSessionBuilder. Type A originates from 'DAL_QSYSCamper, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null' in the context 'Default' at location 'C:\Users\dull.anomal\AppData\Local\Temp\Temporary ASP.NET Files\root\ad6e8bff\70fa2384\assembly\dl3\aaf7a5b0\84f01b09_b10acb01\DAL_QSYSCamper.DLL'. Type B originates from 'DAL_QSYSCamper, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null' in the context 'LoadNeither' at location 'C:\Users\dull.anomal\Documents\Projects\QSYS\Deleteme\UI\MVCClient\bin\DAL_QSYSCamper.DLL'. This is happening while debugging in VS; VS manages to step into the source DAL project even though I've loaded from the assembly and the project is not referenced by the website project (they're both in the solution). I do understand the error, but I don't understand how and why the assembly is being used/loaded from two locations; I only load it once from the file and there's no reference to the project. I should mention that I also use Windsor for DI. The object that tries to extract the object from the session is A) from a class from that DAL assembly and B) injected into a website class by Windsor. I will work on adding some sample code to this question, but wanted to put it out in case it's obvious what I'm doing wrong.
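
    One direction that is often suggested for this symptom (a hedged sketch, not a verified fix for this exact setup): load the DLL in a way the binder can unify with the copy ASP.NET already shadow-copies from bin, instead of Assembly.LoadFile, which loads without a binding context (the 'LoadNeither' context named in the error):

        // Instead of:
        // var asm = System.Reflection.Assembly.LoadFile(pathToDll);

        // Either load from a path the binder can track...
        var asm = System.Reflection.Assembly.LoadFrom(pathToDll);

        // ...or, since the DLL is already copied into the site's bin folder,
        // let normal probing resolve it by name:
        var asm2 = System.Reflection.Assembly.Load("DAL_QSYSCamper");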

    Read the article

  • What's the cleanest way to do byte-level manipulation?

    - by Jurily
    I have the following C struct from the source code of a server, and many similar: // preprocessing magic: 1-byte alignment typedef struct AUTH_LOGON_CHALLENGE_C { // 4 byte header uint8 cmd; uint8 error; uint16 size; // 30 bytes uint8 gamename[4]; uint8 version1; uint8 version2; uint8 version3; uint16 build; uint8 platform[4]; uint8 os[4]; uint8 country[4]; uint32 timezone_bias; uint32 ip; uint8 I_len; // I_len bytes uint8 I[1]; } sAuthLogonChallenge_C; // usage (the actual code that will read my packets): sAuthLogonChallenge_C *ch = (sAuthLogonChallenge_C*)&buf[0]; // where buf is a raw byte array These are TCP packets, and I need to implement something that emits and reads them in C#. What's the cleanest way to do this? My current approach involves [StructLayout(LayoutKind.Sequential, Pack = 1)] unsafe struct foo { ... } and a lot of fixed statements to read and write it, but it feels really clunky, and since the packet itself is not fixed length, I don't feel comfortable using it. Also, it's a lot of work. However, it does describe the data structure nicely, and the protocol may change over time, so this may be ideal for maintenance. What are my options? Would it be easier to just write it in C++ and use some .NET magic to use that?
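
    For the fixed-size header portion, one common pattern (a sketch only; the variable-length I field still has to be sliced out of the buffer by hand) is StructLayout with Pack = 1 plus Marshal.PtrToStructure, which avoids unsafe and fixed entirely:

        using System;
        using System.Runtime.InteropServices;

        [StructLayout(LayoutKind.Sequential, Pack = 1)]
        struct AuthLogonChallengeHeader
        {
            public byte cmd;
            public byte error;
            public ushort size;
            // ...remaining fixed-size fields, using
            // [MarshalAs(UnmanagedType.ByValArray, SizeConst = 4)] for the byte[4] fields
        }

        static T ReadStruct<T>(byte[] buf) where T : struct
        {
            GCHandle handle = GCHandle.Alloc(buf, GCHandleType.Pinned);
            try
            {
                // Reinterpret the first SizeOf(T) bytes of the buffer as T.
                return (T)Marshal.PtrToStructure(handle.AddrOfPinnedObject(), typeof(T));
            }
            finally
            {
                handle.Free();
            }
        }

        // The trailing I[I_len] bytes can then be copied out of buf manually,
        // since the struct only describes the fixed-length part of the packet.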

    Read the article

  • Snapshot agent obliterates conflicts

    - by mwolfe02
    We are using merge replication in SQL Server 2000. We have a snapshot agent that runs every night that updates the publication snapshot. About six months ago we updated from SQL Server 7.0 to 2000 (that's not a typo). We noticed a sharp decline in conflicts at that time but could not track down the reason. We finally found that the daily snapshot agent is recreating the conflict tables every night. This seems to be a change in functionality from SQL Server 7.0. We were running the snapshot agent before and the conflicts would accumulate. Is there some way to prevent the data in the conflict tables from being lost when the snapshot runs? Can anyone confirm a change in behavior between 7.0 and 2000? Our current plan is to simply stop automatically updating the publication snapshot. Is that a reasonable workaround? Here is the line from the script that is adding the snapshot: exec sp_addpublication_snapshot @publication = N'MyPub' , @frequency_type = 4 , @frequency_interval = 1 , @frequency_relative_interval = 1 , @frequency_recurrence_factor = 0 , @frequency_subday = 1 , @frequency_subday_interval = 5 , @active_start_date = 0 , @active_end_date = 0 , @active_start_time_of_day = 500 , @active_end_time_of_day = 235959 Here is the step that runs in the agent job: Step Name: Run agent. Type: Replication Snapshot Command: -Publisher [WCDBS02] -PublisherDB [TaxDB] -Distributor [WCDBS02] -Publication [TaxDB] -ReplicationType 2 -DistributorSecurityMode 1 This appears to be running the Replication Snapshot Agent Utility. There is no mention on that link about dropping and recreating system conflict tables, nor is there any flag that can be set to alter this behavior.

    Read the article

  • Modify columns in a data frame in R more cleanly - maybe using with() or apply()?

    - by Mittenchops
    I understand the answer in R to repetitive things is usually "apply()" rather than a loop. Is there a better R design pattern for a nasty bit of code I create frequently? So, pulling tabular data from HTML, I usually need to change the data type, and end up running something like this to convert the first column to date format (from decimal) and columns 2-4 from character strings with comma thousand separators like "2,400,000" to numeric "2400000": X[,1] <- decYY2YY(as.numeric(X[,1])) X[,2] <- as.numeric(gsub(",", "", X[,2])) X[,3] <- as.numeric(gsub(",", "", X[,3])) X[,4] <- as.numeric(gsub(",", "", X[,4])) I don't like that I have X[,number] repeated on both the left and right sides here, or that I have basically the same statement repeated for columns 2-4. Is there a very R-style way of making this less repetitive but still loop-free? Something that sort of says "apply this to columns 2, 3, 4": a function that reassigns the current column to a modified version in place? I don't want to create a whole, repeatable cleaning function, really, just a quick anonymous function that does this with less repetition.
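
    A minimal sketch of the usual idiom (column indices assumed to match the example) is to assign to a block of columns at once with lapply and an anonymous function:

        # decYY2YY is the asker's own conversion helper
        X[, 1] <- decYY2YY(as.numeric(X[, 1]))

        # Strip thousands separators and convert columns 2-4 in one statement
        X[, 2:4] <- lapply(X[, 2:4], function(col) as.numeric(gsub(",", "", col)))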

    Read the article

  • Faster quadrature decoder loops with Python code

    - by Kelei
    I'm working with a BeagleBone Black and using Adafruit's IO Python library. Wrote a simple quadrature decoding function and it works perfectly fine when the motor runs at about 1800 RPM. But when the motor runs at higher speeds, the code starts missing some of the interrupts and the encoder counts start to accumulate errors. Do you guys have any suggestions as to how I can make the code more efficient or if there are functions which can cycle the interrupts at a higher frequency. Thanks, Kel Here's the code: # Define encoder count function def encodercount(term): global counts global Encoder_A global Encoder_A_old global Encoder_B global Encoder_B_old global error Encoder_A = GPIO.input('P8_7') # stores the value of the encoders at time of interrupt Encoder_B = GPIO.input('P8_8') if Encoder_A == Encoder_A_old and Encoder_B == Encoder_B_old: # this will be an error error += 1 print 'Error count is %s' %error elif (Encoder_A == 1 and Encoder_B_old == 0) or (Encoder_A == 0 and Encoder_B_old == 1): # this will be clockwise rotation counts += 1 print 'Encoder count is %s' %counts print 'AB is %s %s' % (Encoder_A, Encoder_B) elif (Encoder_A == 1 and Encoder_B_old == 1) or (Encoder_A == 0 and Encoder_B_old == 0): # this will be counter-clockwise rotation counts -= 1 print 'Encoder count is %s' %counts print 'AB is %s %s' % (Encoder_A, Encoder_B) else: #this will be an error as well error += 1 print 'Error count is %s' %error Encoder_A_old = Encoder_A # store the current encoder values as old values to be used as comparison in the next loop Encoder_B_old = Encoder_B # Initialize the interrupts - these trigger on the both the rising and falling GPIO.add_event_detect('P8_7', GPIO.BOTH, callback = encodercount) # Encoder A GPIO.add_event_detect('P8_8', GPIO.BOTH, callback = encodercount) # Encoder B # This is the part of the code which runs normally in the background while True: time.sleep(1)
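
    A couple of things usually help here (sketch only, assuming the Adafruit_BBIO GPIO module the code above appears to use): keep the callback as short as possible, drop the print calls out of it, and reduce the decode to a transition-table lookup, reporting counts from the main loop instead:

        import Adafruit_BBIO.GPIO as GPIO
        import time

        counts = 0
        state = 0  # last 2-bit AB state

        # Quadrature transition table indexed by (old_state << 2) | new_state:
        # +1 = one direction, -1 = the other, 0 = no change / invalid transition.
        # The sign convention depends on wiring, so swap +1/-1 if counts run backwards.
        TRANSITIONS = [0, +1, -1, 0,
                       -1, 0, 0, +1,
                       +1, 0, 0, -1,
                       0, -1, +1, 0]

        def encodercount(channel):
            global counts, state
            new_state = (GPIO.input('P8_7') << 1) | GPIO.input('P8_8')
            counts += TRANSITIONS[(state << 2) | new_state]
            state = new_state  # no printing here; keep the interrupt handler short

        GPIO.setup('P8_7', GPIO.IN)
        GPIO.setup('P8_8', GPIO.IN)
        state = (GPIO.input('P8_7') << 1) | GPIO.input('P8_8')  # seed with the real pin state

        GPIO.add_event_detect('P8_7', GPIO.BOTH, callback=encodercount)
        GPIO.add_event_detect('P8_8', GPIO.BOTH, callback=encodercount)

        while True:
            time.sleep(1)
            print('Encoder count is %s' % counts)  # report from the main loop, not the ISR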

    Read the article

  • Spinner original text

    - by user1696863
    I'm trying my Spinner to display "Select City" before the Spinner has itself been clicked by the user. How can I do this? My current XML code is: <?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android" xmlns:custom="http://schemas.android.com/apk/res/com.olacabs.customer" android:layout_width="fill_parent" android:layout_height="fill_parent" android:background="@drawable/page_background" android:orientation="vertical" > <TextView android:id="@+id/textView1" android:layout_width="fill_parent" android:layout_height="wrap_content" android:background="@android:color/darker_gray" android:gravity="center" android:paddingBottom="4dp" android:paddingTop="4dp" android:text="@string/rate_card" android:textColor="@color/white" android:textSize="20dp" custom:customFont="litera_bold.ttf" /> <Spinner android:id="@+id/select_city" android:layout_width="fill_parent" android:layout_height="wrap_content" android:layout_marginTop="40dp" android:prompt="@string/selectCity" /> </LinearLayout> Also, what does android:spinnerMode exactly do. I tried changing its value to dropdown but nothing happened and the application still showed a popup dialogue. My activity that implements this XML file is: public class RateCardActivity extends OlaActivity { public void onCreate(Bundle bundle) { super.onCreate(bundle); setContentView(R.layout.rate_card); Spinner spinner = (Spinner) findViewById(R.id.select_city); ArrayAdapter<CharSequence> adapter = ArrayAdapter.createFromResource(this, R.array.select_city, android.R.layout.simple_spinner_item); adapter.setDropDownViewResource(android.R.layout.simple_spinner_dropdown_item); spinner.setAdapter(adapter); } }
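
    One common workaround (a sketch; it reuses the IDs and the R.array.select_city resource from the code above, and the listener wiring is only illustrative) is to put "Select City" into the adapter as a position-0 placeholder and ignore that position when handling selections. As for android:spinnerMode, it switches between the popup dialog and a dropdown anchored to the Spinner, but the attribute is only honored on API 11 and newer, which would explain why older devices still show the dialog.

        // Build the list with a placeholder entry at position 0.
        List<String> cities = new ArrayList<String>();
        cities.add("Select City"); // placeholder shown before the user picks anything
        cities.addAll(Arrays.asList(getResources().getStringArray(R.array.select_city)));

        ArrayAdapter<String> adapter = new ArrayAdapter<String>(this,
                android.R.layout.simple_spinner_item, cities);
        adapter.setDropDownViewResource(android.R.layout.simple_spinner_dropdown_item);
        spinner.setAdapter(adapter);

        spinner.setOnItemSelectedListener(new AdapterView.OnItemSelectedListener() {
            @Override
            public void onItemSelected(AdapterView<?> parent, View view, int position, long id) {
                if (position == 0) return; // ignore the placeholder
                // handle the real city selection here
            }

            @Override
            public void onNothingSelected(AdapterView<?> parent) { }
        });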

    Read the article

  • How do I load PersistentDocuments into the same window

    - by Brad Stone
    I want to open NSPersistentDocuments and load them into the same window one at a time. I'm almost there but missing some steps; hopefully someone can help me. I have a few saved documents on the hard drive. On launch my app opens to an untitled NSPersistentDocument and creates a separate NSWindowController. When I press the button to load file 1 off the hard drive, the data appears in the fields, but two things are wrong that I can see: 1) changing the data doesn't make the document dirty, and 2) choosing Save updates the persistent store (I know this because when I open the file again I see the changes) but I get an error: +entityForName: could not locate an NSManagedObjectModel for entity name 'Book'. Here's my code, which is in the WindowController that was launched initially with the untitled document. This code isn't perfect; for example, I know I should processPendingChanges and save the current doc before I load the new one. This is test code to try to get over this hurdle. - (IBAction)newBookTwo:(id)sender { NSDocumentController *dc = [NSDocumentController sharedDocumentController]; NSURL *url = [NSURL fileURLWithPath:[@"~/Desktop/File 2.binary" stringByExpandingTildeInPath]]; NSError *error; MainWindowDocument *thisDoc = [dc openDocumentWithContentsOfURL:url display:NO error:&error]; [self setDocument:thisDoc]; [self setManagedObjectContext:[thisDoc managedObjectContext]]; } Thanks!

    Read the article

  • jQuery - Toggle CSS Background Image Expand/Close

    - by urbanrunic
    I am looking for a way to change the background image on toggle, as in this question here: jQuery - Toggle Image Expand/Close. My issue is I have this HTML: <button id="close-menu"></button> <div id="menu-bar"> <div class="wrapper"> <a href="http://www.website.com" target="_blank"> <img src="image.png" alt=""> </a> <div class="options"> <h2>eBlast Tools</h2> <ul> <li class="toggle-images">Images are <span>enabled</span>. Click to disable.</li> <li class="download-zip"><a href="download.zip">Download ZIP File</a></li> </ul> </div> </div> </div> and this is my current jQuery: $('#close-menu').click(function() { $('#menu-bar').slideToggle(400); return false; }); and this is the CSS: #close-menu { background: url(../img/minus-button.png); background-position: top left; width: 25px; height: 25px; position: absolute; top: 10px; right: 20px; z-index: 100; border: none; } #close-menu:hover { background: url(../img/minus-button.png); background-position: bottom left; }
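
    A hedged sketch of one way to do it (the is-closed class name is made up): toggle a modifier class in the same click handler, so the sprite's background-position flips along with the slide.

        /* CSS: modifier class that shows the other half of the sprite */
        #close-menu.is-closed { background-position: bottom left; }

        // jQuery: toggle the class together with the slide
        $('#close-menu').click(function () {
            $(this).toggleClass('is-closed');
            $('#menu-bar').slideToggle(400);
            return false;
        });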

    Read the article

  • Javascript document.open asynchronous?

    - by Alex Schneider
    So on my site there is a JavaScript function that will load a new page from the server via XMLHttpRequest. After that it replaces the current page with the new one: var post = new XMLHttpRequest(); post.open('POST', data); post.onload = function() { var do = document.open("text/html", "replace"); do.write(post.responseText); do.close(); goOn(); } function goOn() { console.log($('img:visible')); } One could assume that after do.close() the document has changed and is ready. But it is not: e.g. if I load a very large responseText, the function goOn() only logs an empty result. Obviously goOn() gets called in that case before the DOM is ready to be read! Unfortunately there is no "ready" event fired after write() finishes... How can I be sure it is finished? /EDIT: goOn() logs this to the Chrome console: [prevObject: p.fn.p.init[1], context: #document, selector: "img:visible"] context: #document length: 0 prevObject: p.fn.p.init[1] selector: "img:visible" __proto__: Object[0] But if I type $('img:visible') into the console manually right after that, it shows me all images....
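
    One low-tech workaround (a sketch, not a guaranteed cross-browser answer): instead of calling goOn() immediately after close(), poll the rewritten document's readyState and continue only once it reports complete.

        post.onload = function () {
            var doc = document.open("text/html", "replace"); // 'do' is a reserved word, so 'doc' here
            doc.write(post.responseText);
            doc.close();

            // Keep checking until the freshly written document has finished loading.
            (function waitForReady() {
                if (document.readyState === 'complete') {
                    goOn();
                } else {
                    setTimeout(waitForReady, 50);
                }
            })();
        };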

    Read the article

  • Publish Git repository to SVN

    - by Ken Williams
    I and my small team work in Git, and the larger group uses Subversion. I'd like to schedule a cron job to publish our repositories' current HEADs every hour into a certain directory in the SVN repo. I thought I had this figured out, but the recipe I wrote down previously doesn't seem to be working now: git clone ssh://me@gitserver/git-repo/Projects/ProjX px2 cd px2 svn mkdir --parents http://me@svnserver/svn/repo/play/me/fromgit/ProjX git svn init -s http://me@svnserver/svn/repo/play/me/fromgit/ProjX git svn fetch git rebase trunk master git svn dcommit Here's what happens when I attempt it: % git clone ssh://me@gitserver/git-repo/Projects/ProjX px2 Cloning into 'ProjX'... ... % cd px2 % svn mkdir --parents http://me@svnserver/svn/repo/play/me/fromgit/ProjX Committed revision 123. % git svn init -s http://me@svnserver/svn/repo/play/me/fromgit/ProjX Using higher level of URL: http://me@svnserver/svn/repo/play/me/fromgit/ProjX => http://me@svnserver/svn/repo % git svn fetch W: Ignoring error from SVN, path probably does not exist: (160013): Filesystem has no item: File not found: revision 100, path '/play/me/fromgit/ProjX' W: Do not be alarmed at the above message git-svn is just searching aggressively for old history. This may take a while on large repositories % git rebase trunk master fatal: Needed a single revision invalid upstream trunk I could have sworn this worked previously; does anyone have any suggestions? Thanks.

    Read the article

  • Convert a image to a monochrome byte array

    - by Scott Chamberlain
    I am writing a library to interface C# with the EPL2 printer language. One feature I would like to try to implement is printing images, the specification doc says p1 = Width of graphic Width of graphic in bytes. Eight (8) dots = one (1) byte of data. p2 = Length of graphic Length of graphic in dots (or print lines) Data = Raw binary data without graphic file formatting. Data must be in bytes. Multiply the width in bytes (p1) by the number of print lines (p2) for the total amount of graphic data. The printer automatically calculates the exact size of the data block based upon this formula. I plan on my source image being a 1 bit per pixel bmp file, already scaled to size. I just don't know how to get it from that format in to a byte[] for me to send off to the printer. I tried ImageConverter.ConvertTo(Object, Type) it succeeds but the array it outputs is not the correct size and the documentation is very lacking on how the output is formatted. My current test code. Bitmap i = (Bitmap)Bitmap.FromFile("test.bmp"); ImageConverter ic = new ImageConverter(); byte[] b = (byte[])ic.ConvertTo(i, typeof(byte[])); Any help is greatly appreciated even if it is in a totally different direction.
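
    One commonly used route (a sketch under the assumption that the source really is a 1bpp BMP already scaled to the print width): lock the bitmap's bits and copy each scan line's packed bytes, which already match the 8-dots-per-byte layout the spec describes.

        using System;
        using System.Drawing;
        using System.Drawing.Imaging;
        using System.Runtime.InteropServices;

        static byte[] GetMonochromeBytes(Bitmap bmp)
        {
            if (bmp.PixelFormat != PixelFormat.Format1bppIndexed)
                throw new ArgumentException("Expected a 1 bit-per-pixel bitmap.");

            int widthBytes = (bmp.Width + 7) / 8;          // p1: width of graphic in bytes
            byte[] result = new byte[widthBytes * bmp.Height];

            BitmapData data = bmp.LockBits(
                new Rectangle(0, 0, bmp.Width, bmp.Height),
                ImageLockMode.ReadOnly,
                PixelFormat.Format1bppIndexed);
            try
            {
                for (int y = 0; y < bmp.Height; y++)
                {
                    // Copy one scan line, skipping the stride padding GDI+ adds per row.
                    IntPtr row = new IntPtr(data.Scan0.ToInt64() + (long)y * data.Stride);
                    Marshal.Copy(row, result, y * widthBytes, widthBytes);
                }
            }
            finally
            {
                bmp.UnlockBits(data);
            }
            return result;
        }

    Depending on the bitmap's palette, the bits may still need to be inverted before sending, since the printer's idea of black/white may not match the BMP's.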

    Read the article

  • Are you using C++0x today? [closed]

    - by Roger Pate
    This is a question in two parts, the first is the most important and concerns now: Are you following the design and evolution of C++0x? What blogs, newsgroups, committee papers, and other resources do you follow? Even where you're not using any new features, how have they affected your current choices? What new features are you using now, either in production or otherwise? The second part is a follow-up, concerning the new standard once it is final: Do you expect to use it immediately? What are you doing to prepare for C++0x, other than as listed for the previous questions? Obviously, compiler support must be there, but there's still co-workers, ancillary tools, and other factors to consider. What will most affect your adoption? Edit: The original really was too argumentative; however, I'm still interested in the underlying question, so I've tried to clean it up and hopefully make it acceptable. This seems a much better avenue than duplicating—even though some answers responded to the argumentative tone, they still apply to the extent that they addressed the questions, and all answers are community property to be cleaned up as appropriate, too.

    Read the article

  • Currently using View, Should I use a hard table instead?

    - by 1001010101
    I am currently debating whether mapping_uGroups_uProducts, which is currently a view defined as follows, should instead be a hard table:

        CREATE ALGORITHM=UNDEFINED DEFINER=`root`@`localhost` SQL SECURITY DEFINER VIEW `db`.`mapping_uGroups_uProducts` AS select distinct `X`.`upID` AS `upID`,`Z`.`ugID` AS `ugID` from ((`db`.`mapping_uProducts_Products` `X` join `db`.`productsInfo` `Y` on((`X`.`pID` = `Y`.`pID`))) join `db`.`mapping_uGroups_Groups` `Z` on((`Y`.`gID` = `Z`.`gID`)));

    My current query is:

        SELECT upID FROM uProductsInfo \
        JOIN fs_uProducts USING (upID) column \
        JOIN mapping_uGroups_uProducts USING (upID) -- could be faster if we use hard table and index \
        JOIN mapping_fs_key USING (fsKeyID) \
        WHERE fsName="OVERALL" \
        AND ugID=1 \
        ORDER BY score DESC \
        LIMIT 0,30;

    which is pretty slow (for 30 results, it takes about 10 seconds). I think the reason my query is so slow is that it relies on a VIEW, which has no index to speed things up.

        +----+-------------+----------------+--------+----------------+---------+---------+--------------------------------+-------+---------------------------------+
        | id | select_type | table          | type   | possible_keys  | key     | key_len | ref                            | rows  | Extra                           |
        +----+-------------+----------------+--------+----------------+---------+---------+--------------------------------+-------+---------------------------------+
        |  1 | PRIMARY     | mapping_fs_key | const  | PRIMARY,fsName | fsName  | 386     | const                          |     1 | Using temporary; Using filesort |
        |  1 | PRIMARY     | <derived2>     | ALL    | NULL           | NULL    | NULL    | NULL                           | 19706 | Using where                     |
        |  1 | PRIMARY     | uProductsInfo  | eq_ref | PRIMARY        | PRIMARY | 4       | mapping_uGroups_uProducts.upID |     1 | Using index                     |
        |  1 | PRIMARY     | fs_uProducts   | ref    | upID           | upID    | 4       | db.uProductsInfo.upID          |   221 | Using where                     |
        |  2 | DERIVED     | X              | ALL    | PRIMARY        | NULL    | NULL    | NULL                           | 40772 | Using temporary                 |
        |  2 | DERIVED     | Y              | eq_ref | PRIMARY        | PRIMARY | 4       | db.X.pID                       |     1 | Distinct                        |
        |  2 | DERIVED     | Z              | ref    | PRIMARY        | PRIMARY | 4       | db.Y.gID                       |     2 | Using index; Distinct           |
        +----+-------------+----------------+--------+----------------+---------+---------+--------------------------------+-------+---------------------------------+
        7 rows in set (0.48 sec)

    The EXPLAIN here looks pretty cryptic, and I don't know whether I should drop the view and write a script to just insert everything from the view into a hard table (obviously, that would lose the flexibility of the view, since the mapping changes quite frequently). Does anyone have any idea how I can optimize my schema better?
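
    If the view does get replaced, a minimal sketch of the materialization (table and index names invented, refreshed however often the mappings change) could look like this:

        -- Build a hard, indexed copy of the view once (or on a schedule)
        CREATE TABLE mapping_uGroups_uProducts_mat (
            upID INT NOT NULL,
            ugID INT NOT NULL,
            PRIMARY KEY (upID, ugID),
            KEY idx_ugID (ugID)
        );

        INSERT INTO mapping_uGroups_uProducts_mat (upID, ugID)
        SELECT DISTINCT X.upID, Z.ugID
        FROM mapping_uProducts_Products X
        JOIN productsInfo Y ON X.pID = Y.pID
        JOIN mapping_uGroups_Groups Z ON Y.gID = Z.gID;

        -- Refresh: TRUNCATE TABLE mapping_uGroups_uProducts_mat; then re-run the INSERT.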

    Read the article

  • Freelance web hosting - what are good LAMP choices?

    - by tkotitan
    I think it's best if I ask this question with an example scenario. Let's say your mom-and-pop local hardware store has never had a website, and they want you, the freelance developer to build them a website. You have all the skills to run a LAMP setup and admin a system, so the difficult question you ask yourself is - where will I host it? As you aren't going to host it out of the machine in your apartment. Let's say you want to be able to customize your own system, install the version of PHP you want, and manage your own database. Perhaps the best kind of hosting is to get a virtual machine so you can customize the system as you see fit. But this essentially a "set it and forget it" site you make, bill by the hour for, and then are done. In other words, the hosting should not be an issue. Given the requirements of hosting a website: Unlimited growth potential needing good amounts of bandwidth to handle visitors Wide range of system and programming options allowing it to be portable Relatively cheap (not necessarily the cheapest) or reasonable scaling cost Reliable hosting with good support Hosted entirely on the host company's hardware Who would you pick to host this website? Yes I am asking for a business/company recommendation. Is there a clear answer for this scenario, or a good source that can reliably give the current answer? I know there are all kinds of schemes out there. I'm just wondering if any one company fills the bill for freelancers and stands out in such a crowded market.

    Read the article

  • Licensing for the undecided

    - by Jasper
    I am creating a game in C++. I am not sure how I will distribute it yet, though I am pretty sure that I won't be asking money for it. I am looking into licensing and I wondered if there is a license that is suited for the undecided like me. My current releases (which are really, really early versions of the game, with far from full functionality) are executable only. However, I am actually thinking that I might release the source under an open source license. For now, I am the only contributor, so that would be no problem, as I only need my own permission to move to a less restrictive license. However, when I allow other people to contribute, I would need all their permissions to do so (right?). So I was wondering if there is a license that lets me distribute the game executable-only for now, but will let me switch to a less restrictive license if I want. Basically I need a license in which contributors give permission to switch to a less restrictive license up front. Does anybody know of a license (or other construction) that would allow me to do so?

    Read the article

  • Help with create action in a different show page

    - by Andrew
    Hi, I'm a Rails newbie and want to do something but keep messing it up. I'd love some help. I have a simple app that has three tables: Users, Robots, and Recipients. Robots belong_to users and Recipients belong_to robots. On the robot show page, I want to be able to create recipients for that robot right within the robot show page. I have the following code in the robot show page, which lists current recipients: <table> <% @robot.recipients.each do |recipient| %> <tr> <td><b><%=h recipient.chat_screen_name %></b> via <%=h recipient.protocol_name %></td> <td><%= link_to 'Edit', edit_recipient_path(recipient) %>&nbsp;</td> <td><%= link_to 'Delete', recipient, :confirm => 'Are you sure?', :method => :delete %></td> </tr> <% end %> </table> What I'd like to do is have an empty field in which the user can add a new recipient. I have tried the following: I added this to the Robots show view: <% form_for(@robot.recipient) do |f| %> Enter the screen name<br> <%= f.text_field :chat_screen_name %> <p> <%= f.submit 'Update' %> </p> <% end %> and then this to the Robot controller in the show action: @recipient = Recipient.new @recipients = Recipient.all Alas, I'm still getting a NoMethodError that says: "undefined method `recipient' for #" I'm not sure what I'm missing. Any help would be greatly appreciated. Thank you.

    Read the article

  • Can't add/remove items from a collection while foreach is iterating over it

    - by flockofcode
    If I make my own implementation of IEnumerator interface, then I am able ( inside foreach statement )to add or remove items from a albumsList without generating an exception.But if foreach statement uses IEnumerator supplied by albumsList, then trying to add/delete ( inside the foreach )items from albumsList will result in exception: class Program { static void Main(string[] args) { string[] rockAlbums = { "rock", "roll", "rain dogs" }; ArrayList albumsList = new ArrayList(rockAlbums); AlbumsCollection ac = new AlbumsCollection(albumsList); foreach (string item in ac) { Console.WriteLine(item); albumsList.Remove(item); //works } foreach (string item in albumsList) { albumsList.Remove(item); //exception } } class MyEnumerator : IEnumerator { ArrayList table; int _current = -1; public Object Current { get { return table[_current]; } } public bool MoveNext() { if (_current + 1 < table.Count) { _current++; return true; } else return false; } public void Reset() { _current = -1; } public MyEnumerator(ArrayList albums) { this.table = albums; } } class AlbumsCollection : IEnumerable { public ArrayList albums; public IEnumerator GetEnumerator() { return new MyEnumerator(this.albums); } public AlbumsCollection(ArrayList albums) { this.albums = albums; } } } a) I assume code that throws exception ( when using IEnumerator implementation A supplied by albumsList ) is located inside A? b) If I want to be able to add/remove items from a collection ( while foreach is iterating over it), will I always need to provide my own implementation of IEnumerator interface, or can albumsList be set to allow adding/removing items? thank you
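
    For completeness, the hand-written MyEnumerator never throws simply because, unlike ArrayList's own enumerator, it does not track the list's version/modification count. The usual workaround for the second loop (sketch only) is to iterate over a snapshot, or loop by index in reverse, so removing from albumsList does not invalidate the enumerator in use:

        // Option 1: enumerate a copy, mutate the original
        foreach (string item in new ArrayList(albumsList))
        {
            albumsList.Remove(item);
        }

        // Option 2: iterate backwards by index, so removals don't shift unvisited items
        for (int i = albumsList.Count - 1; i >= 0; i--)
        {
            albumsList.RemoveAt(i);
        }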

    Read the article

  • How to get an embedded function to run multiple times

    - by Guy Montag
    The question I have is how do I get multiple instances of a function to run. Here is my function below, a simple fade function. The problem I'm having is that when it is called a second time it abandons the first call. So if a user clicks on a button it will display a message which fades; if the user clicks on another button, the previous fading message just stops at the current opacity level. Try it here - www.arcmarks.com ( please do not repost this domain name): click on SignUp and then quickly click on SignIn without typing anything. You will see the previous message simply halts. What is the stopping mechanism? Where did the previous function go? The function: function newEffects(element, direction, max_time ) { newEffects.arrayHold = []; newEffects.arrayHold[element.id] = 0; function next() { newEffects.arrayHold[element.id] += 10; if ( direction === 'up' ) { element.style.opacity = newEffects.arrayHold[element.id] / max_time; } else if ( direction === 'down' ) { element.style.opacity = ( max_time - newEffects.arrayHold[element.id] ) / max_time; } if ( newEffects.arrayHold[element.id] <= max_time ) { setTimeout( next, 10 ); } } next(); return true; }; The call: newEffects(this.element, 'down', 4000 );
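
    The earlier animation stalls because every call re-creates newEffects.arrayHold, wiping the elapsed-time entry the first animation was still reading. A hedged sketch of a fix: keep the per-call state in a local variable inside each call's closure, so calls never share or reset each other's counters.

        function newEffects(element, direction, max_time) {
            var elapsed = 0; // private to this call, so other running fades are unaffected

            function next() {
                elapsed += 10;
                if (direction === 'up') {
                    element.style.opacity = elapsed / max_time;
                } else if (direction === 'down') {
                    element.style.opacity = (max_time - elapsed) / max_time;
                }
                if (elapsed <= max_time) {
                    setTimeout(next, 10);
                }
            }

            next();
            return true;
        }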

    Read the article

  • Scheduling a Delay Job on Heroku with a Worker Dyno

    - by user1524775
    I'm currently using Heroku's scheduler to run a script. However, the time that the script takes to run is going to increase from a few milliseconds to a few minutes. I'm looking at using the delayed_job gem to push this process off to a Worker Dyno. I want to continue to run this script once-a-day, just offload it to the worker. My current rake task is: desc "This task updates some stuff for you." task :update_some_stuff => :environment do puts "Updating some stuff ..." SomeClass.new.process puts "... done." end Once the gem is installed, migration run, and worker dyno started, will the script just need to change to: desc "This task updates some stuff for you." task :update_some_stuff => :environment do puts "Updating some stuff ..." SomeClass.new.delay.process puts "... done." end With this task still being a rake task scheduled by Heroku's Scheduler, is the only thing that needs to happen here the introduction of the delay method to put this in the Worker's queue? Thanks in advance for any help.

    Read the article

  • Undesired Output of Crontab Job Using CURL

    - by Russell C.
    I have written a perl script that runs as a daily crontab job that uploads files to Amazon S3 via CURL. I want the output of the cron job emailed to me which works fine but I don't want that email to include messages related to the CURL upload (only those message my script is outputting). Here are the CURL related messages I'm seeing in the daily email right now: % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 230M 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 0 230M 0 0 0 544k 0 1519k 0:02:35 --:--:-- 0:02:35 1807k 0 230M 0 0 0 1744k 0 1286k 0:03:03 0:00:01 0:03:02 1342k 1 230M 0 0 1 2880k 0 1219k 0:03:13 0:00:02 0:03:11 1250k 1 230M 0 0 1 4016k 0 1198k 0:03:17 0:00:03 0:03:14 1218k 2 230M 0 0 2 5168k 0 1186k 0:03:19 0:00:04 0:03:15 1202k 2 230M 0 0 2 6336k 0 1181k 0:03:19 0:00:05 0:03:14 1157k 3 230M 0 0 3 7488k 0 1177k 0:03:20 0:00:06 0:03:14 1147k 3 230M 0 0 3 8592k 0 1167k 0:03:22 0:00:07 0:03:15 1142k 4 230M 0 0 4 9744k 0 1166k 0:03:22 0:00:08 0:03:14 1145k 4 230M 0 0 4 10.6M 0 1163k 0:03:23 0:00:09 0:03:14 1142k 5 230M 0 0 5 11.7M 0 1161k 0:03:23 0:00:10 0:03:13 1140k 5 230M 0 0 5 12.8M 0 1158k 0:03:23 0:00:11 0:03:12 1133k 6 230M 0 0 6 13.9M 0 1155k 0:03:24 0:00:12 0:03:12 1138k 6 230M 0 0 6 15.0M 0 1155k 0:03:24 0:00:13 0:03:11 1138k 7 230M 0 0 7 16.1M 0 1152k 0:03:25 0:00:14 0:03:11 1131k 7 230M 0 0 7 17.2M 0 1152k 0:03:25 0:00:15 0:03:10 1132k 7 230M 0 0 7 18.4M 0 1152k 0:03:24 0:00:16 0:03:08 1140k I am using a simple Perl system() call to invoke CURL. Does anyone know what command line argument I can supply CURL to turn off the reporting of the upload progress? Thanks in advance for your help!
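
    For the CURL part specifically, the progress meter can be suppressed with curl's silent flags: -s/--silent turns the meter off, and adding -S/--show-error keeps real errors visible. A sketch of the call from Perl (variable names are placeholders; the actual upload command in the script may differ):

        # -s silences the progress meter, -S still reports actual errors
        system("curl", "-sS", "-T", $local_file, $s3_url) == 0
            or warn "curl upload failed: $?";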

    Read the article

  • How to reorder table rows (drag-and-drop) along with their sub rows

    - by Eirik Johansen
    I have a table which looks like this (simplified for the example): <table> <tr class="lvl_1"> <td> Level 1 </td> </tr> <tr class="lvl_2"> <td> Level 2 </td> </tr> <tr class="lvl_3"> <td> Level 3 </td> </tr> <tr class="lvl_1"> <td> Level 1 </td> </tr> <tr class="lvl_2"> <td> Level 2 </td> </tr> <tr class="lvl_3"> <td> Level 3 </td> </tr> The rows with the lvl_3 class are children of the previous lvl_2 row, and the lvl_2 rows are children of the previous lvl_1. Had the data been a list, it would have looked something like this: Level 1 -- Level 2 ---- Level 3 Level 1 -- Level 2 ---- Level 3 I'm now looking to implement drag-and-drop sorting functionality to make it possible to rearrange the level 1 and level 2 rows. The tricky part is that once I start moving a row, the corresponding children (and grand-children, if any) should move along with it. Is this even possible with the current markup, or do I have to rearrange the code? Thanks in advance!
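
    Grouping a parent row with its sub-rows is possible with the existing markup; a hedged sketch of the core trick (which a drag-and-drop plugin such as jQuery UI sortable can reuse in its helper/stop callbacks) collects everything up to the next lvl_1 row and moves it as one set:

        // Given a level-1 row, its whole subtree is the run of rows
        // up to (but not including) the next level-1 row:
        var $row = $('tr.lvl_1').first();                   // the row being moved
        var $subtree = $row.add($row.nextUntil('tr.lvl_1'));

        // Moving the group is then a matter of re-inserting all of it at once,
        // e.g. after another level-1 row's subtree:
        var $target = $('tr.lvl_1').eq(1);
        var $targetEnd = $target.nextUntil('tr.lvl_1').last();
        ($targetEnd.length ? $targetEnd : $target).after($subtree);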

    Read the article

  • Use SQL to clone a tree structure represented in a database

    - by AmoebaMan17
    Given a table that represents a hierarchical tree structure and has three columns, ID (primary key, not auto-incrementing), ParentGroupID, and SomeValue, I know the lowest node of a branch, and I want to copy that branch to a new branch whose parents also need to be cloned. I am trying to write a single SQL INSERT INTO statement that will make a copy of every row that is part of one branch into a new branch. Example beginning table:

        ID | ParentGroupID | SomeValue
        ------------------------------
        1  | -1            | a
        2  | 1             | b
        3  | 2             | c

    Goal after I run a simple INSERT INTO statement:

        ID | ParentGroupID | SomeValue
        ------------------------------
        1  | -1            | a
        2  | 1             | b
        3  | 2             | c
        4  | -1            | a-cloned
        5  | 4             | b-cloned
        6  | 5             | c-cloned

    Final tree structure:

        +--a (1)
        |  +--b (2)
        |     +--c (3)
        +--a-cloned (4)
        |  +--b-cloned (5)
        |     +--c-cloned (6)

    The IDs aren't always nicely spaced out as this demo data is showing, so I can't always assume that the parent's ID is 1 less than the current ID for rows that have parents. Also, I am trying to do this in T-SQL (for Microsoft SQL Server 2005 and greater). This feels like a classic exercise that should have a pure-SQL answer, but I'm too used to procedural programming and my mind doesn't think in relational SQL.
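
    A hedged sketch of one pure T-SQL approach (it assumes the whole branch can be shifted by a single offset such as the current maximum ID, that -1 marks a root, that SomeValue is a string column, and that GroupTree is a placeholder table name): insert the branch with both ID and ParentGroupID shifted by the offset, which preserves the parent/child links no matter how the original IDs are spaced.

        DECLARE @offset int, @rootID int;
        SET @rootID = 1;                                   -- root of the branch to clone
        SELECT @offset = MAX(ID) FROM GroupTree;

        ;WITH Branch AS (                                  -- the root plus all of its descendants
            SELECT ID, ParentGroupID, SomeValue
            FROM GroupTree
            WHERE ID = @rootID
            UNION ALL
            SELECT c.ID, c.ParentGroupID, c.SomeValue
            FROM GroupTree c
            JOIN Branch p ON c.ParentGroupID = p.ID
        )
        INSERT INTO GroupTree (ID, ParentGroupID, SomeValue)
        SELECT ID + @offset,
               CASE WHEN ID = @rootID THEN -1              -- the cloned root stays a root
                    ELSE ParentGroupID + @offset END,
               SomeValue + '-cloned'
        FROM Branch;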

    Read the article

  • Most watched videos this week

    - by Jan Hancic
    I have a YouTube-like web page where users upload and watch videos. I would like to add a "most watched videos this week" list of videos to my page. But this list should not contain just the videos that were uploaded in the previous week, but all videos. I'm currently recording views in a column, so I have no information on when a video was watched. So now I'm searching for a solution for how to record this data. The first option is the most obvious (and the correct one, as far as I know): have a separate table in which you insert a new row every time you want to record a new view (storing the ID of the video and the timestamp). I'm worried that I would quickly get huge amounts of data in this table, and queries using this table would be extremely slow (we get about 3 million views a month). The second solution isn't as flexible but is easier on the database. I would add 7 columns to the "videos" table (one for each day of the week): views_monday, views_tuesday, views_wednesday, ... and increment the value in the correct column based on the day it is, and reset the current day's column to 0 at midnight. I could then easily get the most watched videos of the week by summing these 7 columns. What do you think, should I bother with the first solution or will the second one suffice for my case? If you have a better solution please share! Oh, I'm using MySQL.
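
    A sketch of the first option in MySQL (table and column names invented), which keeps per-view timestamps and answers "most watched this week" with a single aggregate; with an index on the timestamp and periodic pruning or pre-aggregation, 3 million rows a month is usually manageable:

        CREATE TABLE video_views (
            video_id  INT UNSIGNED NOT NULL,
            viewed_at DATETIME     NOT NULL,
            KEY idx_viewed_at (viewed_at),
            KEY idx_video (video_id)
        );

        -- record a view
        INSERT INTO video_views (video_id, viewed_at) VALUES (123, NOW());

        -- most watched videos over the last 7 days, regardless of upload date
        SELECT video_id, COUNT(*) AS views_last_week
        FROM video_views
        WHERE viewed_at >= NOW() - INTERVAL 7 DAY
        GROUP BY video_id
        ORDER BY views_last_week DESC
        LIMIT 20;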

    Read the article
