Search Results

Search found 1014 results on 41 pages for 'iteration'.

Page 34/41 | < Previous Page | 30 31 32 33 34 35 36 37 38 39 40 41  | Next Page >

  • What is a truly empty std::vector in C++?

    - by RyanG
    I've got two vectors in class A that contain objects of two other classes, B and C. I know exactly how many elements these vectors are supposed to hold at maximum. In the initializer list of class A's constructor, I initialize these vectors to their max sizes (constants). If I understand this correctly, I now have a vector of objects of class B that have been initialized using their default constructor. Right? When I wrote this code, I thought this was the only way to deal with things. However, I've since learned about std::vector::reserve() and I'd like to achieve something different. I'd like to allocate enough memory for these vectors to grow as large as they will ever need to be, because adding to them is controlled by user input, so I don't want frequent resizes. However, I iterate through this vector many, many times per second and I only work on objects I've flagged as "active". Having to check a boolean member of class B/C on every iteration is silly. I don't want these objects to even BE there for my iterators to see when I run through this list. Is reserving the max space ahead of time and using push_back to add a new object to the vector a solution to this?
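
    A minimal sketch of the approach described above (the class body and the capacity of 64 are made up for illustration): reserve() only allocates capacity, so no elements are constructed, and iteration only ever visits the objects added with push_back.

        #include <vector>

        struct B { bool active; };

        int main() {
            std::vector<B> bs;
            bs.reserve(64);           // capacity only; size() stays 0, no B is constructed yet
            bs.push_back(B{true});    // only objects actually in use are ever added
            for (const B& b : bs) {   // iteration touches exactly the pushed objects
                (void)b;
            }
        }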

    Read the article

  • Facebook / Offline Permission - Trying to perform an action on a set of offline users.

    - by blueigloo
    Hi there, We're building an app which in part of its functionality tries to capture the number of likes associated to a particular video owned by a user. Users of the app are asked for extended off-line access and we capture the key for each user: The format is like this: 2.hg2QQuYeftuHx1R84J1oGg__.XXXX.1272394800-nnnnnn Each user has their offline / infinite key stored in a table in a DB. The object_id which we're interested in is also stored in the DB. At a later stage (offline) we try to run a batch job which reads the number of likes for each user's video. (See attached code) For some reason however, after the first iteration of the loop - which yields the likes correctly, we get a failure with the oh so familiar message: "Session key is invalid or no longer valid" Any insight would be most appreciated. Thanks, B List<DVideo> videoList = db.SelectVideos(); foreach (DVideo video in videoList) { long userId = 0; ConnectSession fbSession = new ConnectSession(APPLICATION_KEY, SECRET_KEY); //session key is attached to the video object for now. fbSession.SessionKey = video.UserSessionKey; fbSession.SessionExpires = false; string fbuid =video.FBUID; long.TryParse(fbuid, out userId); if (userId > 0) { fbSession.UserId = userId; fbSession.Login(); Api fbApi = new Facebook.Rest.Api(fbSession); string xmlQueryResult = fbApi.Fql.Query("SELECT user_id FROM like WHERE object_id = " + video.FBVID); XmlDocument xmlDoc = new XmlDocument(); xmlDoc.Load(new StringReader(xmlQueryResult)); int likesCount = xmlDoc.GetElementsByTagName("user_id").Count; //Write entry in VideoWallLikes if (likesCount > 0) { db.CountWallLikes(video.ID, likesCount); } fbSession.Logout(); } fbSession = null; }

    Read the article

  • "Find all tiles connected to this one" project

    - by Omega
    Remember MS Paint? The bucket tool? If you clicked on a pixel with it, all pixels connected to that pixel that are the same were affected. The idea, I suppose, is to check the pixels adjacent to the selected one; if an adjacent pixel is the same type as the selected one, check its neighbors in turn, and so on. I want to implement something similar in VB.NET. Basically I have a 2D array which represents the map. Let's assume there are only two types of tile: 0 and 1. Now, I have pretty much everything ready: I have my 2D map, I can tell which tile was clicked, and I can tell which array indexes represent that tile. Now for the "painting" process. Whenever I think about it, I can't figure out a convenient way to execute such an iteration. Can someone help me choose a correct design/way/tip to achieve this?
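
    A minimal breadth-first flood-fill sketch of the kind of iteration described, assuming a rectangular Integer map; the names are illustrative only, and Point here is just System.Drawing.Point (any simple x/y pair would do).

        ' Replaces every tile connected to (startX, startY) that shares its original value.
        Sub FloodFill(map(,) As Integer, startX As Integer, startY As Integer, newTile As Integer)
            Dim oldTile As Integer = map(startX, startY)
            If oldTile = newTile Then Return
            Dim queue As New Queue(Of Point)()
            queue.Enqueue(New Point(startX, startY))
            While queue.Count > 0
                Dim p As Point = queue.Dequeue()
                If p.X < 0 OrElse p.Y < 0 OrElse p.X >= map.GetLength(0) OrElse p.Y >= map.GetLength(1) Then Continue While
                If map(p.X, p.Y) <> oldTile Then Continue While
                map(p.X, p.Y) = newTile
                queue.Enqueue(New Point(p.X - 1, p.Y))
                queue.Enqueue(New Point(p.X + 1, p.Y))
                queue.Enqueue(New Point(p.X, p.Y - 1))
                queue.Enqueue(New Point(p.X, p.Y + 1))
            End While
        End Sub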

    Read the article

  • Visual C++ function suddenly 170x slower

    - by Mikael
    For the past few months I've been working on a Visual C++ project to take images from cameras and process them. Up until today this has taken about 65 ms to update the data but now it has suddenly increased significantly. What happens is: I launch my program and for the first 30 or so iterations it performs as expected, then suddenly the loop time increases from 65 ms to 250 ms. The odd thing is, after timing each function I found out that the part of the code which is causing the slowdown is fairly basic and has not been modified in over a month. The data which goes into it is unchanged and identical every iteration but the execution time which is initially less than 1 ms suddenly increases to 170 ms while the rest of the code is still performing as expected (time-wise). Basically, I am calling the same function over and over, for the first 30 calls it performs as it should, after that it slows down for no apparent reason. It might also be worth noting that it is a sudden change in execution time, not a gradual increase. What could be causing this? The code is leaking some memory (~50 kb/s) but not nearly enough to warrant a sudden 4x slowdown. If anyone has any ideas I'd love to hear them!

    Read the article

  • JavaScript array random index insertion and deletion

    - by Tomi
    I'm inserting some items into array with randomly created indexes, for example like this: var myArray = new Array(); myArray[123] = "foo"; myArray[456] = "bar"; myArray[789] = "baz"; ... In other words array indexes do not start with zero and there will be "numeric gaps" between them. My questions are: Will these numeric gaps be somehow allocated (and therefore take some memory) even when they do not have assigned values? When I delete myArray[456] from upper example, would items below this item be relocated? EDIT: Regarding my question/concern about relocation of items after insertion/deletion - I want to know what happens with the memory and not indexes. More information from wikipedia article: Linked lists have several advantages over dynamic arrays. Insertion of an element at a specific point of a list is a constant-time operation, whereas insertion in a dynamic array at random locations will require moving half of the elements on average, and all the elements in the worst case. While one can "delete" an element from an array in constant time by somehow marking its slot as "vacant", this causes fragmentation that impedes the performance of iteration.
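
    A quick way to see the behavior in question: JavaScript arrays are sparse, so the unassigned indexes hold no values, and delete leaves a hole rather than shifting later elements (illustrative snippet).

        var myArray = [];
        myArray[123] = "foo";
        myArray[456] = "bar";
        myArray[789] = "baz";

        console.log(myArray.length);   // 790 (highest index + 1), but only 3 slots actually exist
        console.log(456 in myArray);   // true
        delete myArray[456];           // removes the slot; nothing is relocated
        console.log(456 in myArray);   // false
        console.log(myArray[789]);     // still "baz", its index is unchanged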

    Read the article

  • Which of the following Java coding fragments is better?

    - by Simon
    This isn't meant to be subjective; I am looking for reasons based on resource utilisation, compiler performance, GC performance etc. rather than elegance. Oh, and the position of brackets doesn't count, so no stylistic comments please. Take the following loop: Integer total = new Integer(0); Integer i; for (String str : string_list) { i = Integer.parseInt(str); total += i; } versus... Integer total = 0; for (String str : string_list) { Integer i = Integer.parseInt(str); total += i; } In the first one i is function-scoped, whereas in the second it is scoped to the loop. I have always thought (believed) that the first one would be more efficient because it just references an existing variable already allocated on the stack, whereas the second one would be pushing and popping i on each iteration of the loop. There are quite a lot of other cases where I tend to scope variables more broadly than perhaps necessary, so I thought I would ask here to clear up a gap in my knowledge. Also note that the assignment on initialisation may or may not involve the new operator. Do any of these sorts of semi-stylistic semi-optimisations make any difference at all?

    Read the article

  • Turn based synchronization between threads

    - by Amarus
    I'm trying to find a way to synchronize multiple threads having the following conditions: * There are two types of threads: 1. A single "cyclic" thread executing an infinite loop to do cyclic calculations 2. Multiple short-lived threads not started by the main thread * The cyclic thread has a sleep duration between each cycle/loop iteration * The other threads are allowed execute during the inter-cycle sleep of the cyclic thread: - Any other thread that attempts to execute during an active cycle should be blocked - The cyclic thread will wait until all other threads that are already executing to be finished Here's a basic example of what I was thinking of doing: // Somewhere in the code: ManualResetEvent manualResetEvent = new ManualResetEvent(true); // Allow Externally call CountdownEvent countdownEvent = new CountdownEvent(1); // Can't AddCount a CountdownEvent with CurrentCount = 0 void ExternallyCalled() { manualResetEvent.WaitOne(); // Wait until CyclicCalculations is having its beauty sleep countdownEvent.AddCount(); // Notify CyclicCalculations that it should wait for this method call to finish before starting the next cycle Thread.Sleep(1000); // TODO: Replace with actual method logic countdownEvent.Signal(); // Notify CyclicCalculations that this call is finished } void CyclicCalculations() { while (!stopCyclicCalculations) { manualResetEvent.Reset(); // Block all incoming calls to ExternallyCalled from this point forward countdownEvent.Signal(); // Dirty workaround for the issue with AddCount and CurrentCount = 0 countdownEvent.Wait(); // Wait until all of the already executing calls to ExternallyCalled are finished countdownEvent.Reset(); // Reset the CountdownEvent for next cycle. Thread.Sleep(2000); // TODO: Replace with actual method logic manualResetEvent.Set(); // Unblock all threads executing ExternallyCalled Thread.Sleep(1000); // Inter-cycles delay } } Obviously, this doesn't work. There's no guarantee that there won't be any threads executing ExternallyCalled that are in between manualResetEvent.WaitOne(); and countdownEvent.AddCount(); at the time the main thread gets released by the CountdownEvent. I can't figure out a simple way of doing what I'm after, and almost everything that I've found after a lengthy search is related to producer/consumer synchronization which I can't apply here.
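
    One alternative to the CountdownEvent juggling (and the race it leaves between WaitOne and AddCount) is a reader/writer lock: the cyclic thread holds the write lock for the whole cycle, which blocks newcomers and waits for in-flight calls, while the short-lived threads hold read locks. A hedged C# sketch, not the poster's code:

        private static readonly ReaderWriterLockSlim gate = new ReaderWriterLockSlim();

        void ExternallyCalled()
        {
            gate.EnterReadLock();            // blocks while a cycle is active; many readers may run at once
            try { Thread.Sleep(1000); }      // TODO: actual method logic
            finally { gate.ExitReadLock(); }
        }

        void CyclicCalculations()
        {
            while (!stopCyclicCalculations)
            {
                gate.EnterWriteLock();       // waits for running externals, blocks new ones
                try { Thread.Sleep(2000); }  // TODO: actual cycle logic
                finally { gate.ExitWriteLock(); }
                Thread.Sleep(1000);          // inter-cycle sleep: external calls run here
            }
        }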

    Read the article

  • Javascript append to onClick event

    - by John Hartsock
    Guys, I have the following code which I know doesn't work correctly. Yes, I know how to do this in jQuery, but in this case I cannot use jQuery. Please no jQuery answers. <form> <input type="text" name="input1" onclick="alert('hello')"> <input type="text" name="input2"> <input type="text" name="input3"> </form> <script type="text/javascript"> window.onload = function () { var currentOnClick; for (var i = 0; i < document.forms[0].elements.length; i++) { currentOnClick = document.forms[0].elements[i].onclick; document.forms[0].elements[i].onclick = function () { if (currentOnClick) { currentOnClick(); } alert("hello2"); } } } </script> What I'm trying to do is iterate through the form's elements and add to the onclick function. But due to the fact that by my last iteration currentOnClick is null, this does not run as expected. I want to preserve each element's onclick method and play it back in the new function I'm creating. What I want: when input1 is clicked, alert "hello" then alert "hello2"; when input2 is clicked, alert "hello2"; when input3 is clicked, alert "hello2".
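
    A common fix for this pattern is to capture each element's existing handler in its own scope, for example with an immediately-invoked function, so every new handler closes over its own copy rather than the shared loop variable (illustrative sketch):

        window.onload = function () {
            var elements = document.forms[0].elements;
            for (var i = 0; i < elements.length; i++) {
                (function (element, previousOnClick) {
                    element.onclick = function () {
                        if (previousOnClick) {
                            previousOnClick.call(element);  // play back the original handler, if any
                        }
                        alert("hello2");
                    };
                })(elements[i], elements[i].onclick);
            }
        };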

    Read the article

  • LINQ optimization

    - by Budda
    Here is a piece of code: void MyFunc(List<MyObj> objects) { MyFunc1(objects); foreach( MyObj obj in objects.Where(obj1=>obj1.Good)) { // Do Action With Good Object } } void MyFunc1(List<MyObj> objects) { int iGoodCount = objects.Where(obj1=>obj1.Good).Count(); BeHappy(iGoodCount); // do other stuff with 'objects' collection } Here we see that the collection is analyzed twice and each time the value of the 'Good' property is checked for each member: the 1st time when calculating the count of good objects, the 2nd when iterating through all good objects. It is desirable to have that optimized, and here is a straightforward solution: before the call to MyFunc1, create an additional temporary collection of good objects only (goodObjects, it can be IEnumerable); get the count of these objects and pass it as an additional parameter to MyFunc1; in the 'MyFunc' method iterate not through 'objects.Where(...)' but through the 'goodObjects' collection. Not too bad an approach (as far as I see), but an additional parameter has to be passed. Question: is there any LINQ out-of-the-box functionality that allows any caching during the 1st Where().Count(), remembering the processed collection and using it in the next iteration? Any thoughts are welcome. Thanks.
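
    There is no built-in caching in a deferred Where(), but materializing the filter once gives the same effect without threading an extra count parameter through — a hedged sketch reusing the question's names:

        void MyFunc(List<MyObj> objects)
        {
            // Evaluate the predicate exactly once and keep the result.
            List<MyObj> goodObjects = objects.Where(obj => obj.Good).ToList();

            MyFunc1(objects, goodObjects.Count);
            foreach (MyObj obj in goodObjects)
            {
                // Do Action With Good Object
            }
        }

        void MyFunc1(List<MyObj> objects, int goodCount)
        {
            BeHappy(goodCount);
            // do other stuff with 'objects' collection
        }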

    Read the article

  • Modifying Django's pre_save/post_save Data

    - by Rodrogo
    Hi, I'm having a hard time grasping the post_save/pre_save signals from Django. What happens is that my model has a field called status, and when an entry for this model is added/saved, its status must be changed according to some condition. My model looks like this: class Ticket(models.Model): (...) status = models.CharField(max_length=1,choices=OFFERT_STATUS, default='O') And my signal handler, configured for pre_save: def ticket_handler(sender, **kwargs): ticket = kwargs['instance'] (...) if someOtherCondition: ticket.status = 'C' Now, if I put a ticket.save() just below that last if statement, what I get is a huge iteration black hole, since this action triggers the signal itself. And this problem happens with both pre_save and post_save. Well... I guess that the capability of altering an entry before (or even after) saving it is pretty common in Django's universe. So, what am I doing wrong here? Are signals the wrong approach, or am I missing something else? Also, would it be possible, once this pre_save/post_save function is triggered, to access another model's instance and change a specific row entry on that? Thanks
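
    A minimal sketch of the usual pattern with Django's signal API: in a pre_save receiver you only mutate the instance and let the save that is already in progress persist it, so no extra save() call (and no recursion) is needed. The condition function below is a placeholder.

        from django.db.models.signals import pre_save
        from django.dispatch import receiver

        @receiver(pre_save, sender=Ticket)
        def ticket_handler(sender, instance, **kwargs):
            if some_other_condition(instance):   # placeholder for the real condition
                instance.status = 'C'            # changed in place; the pending save writes it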

    Read the article

  • jQuery $.each()-problem

    - by Volmar
    Hi, I'm making a WordPress plugin and I have a function where I import images. This is done with a $.each() loop that calls .load() on every iteration. The page that the load call requests downloads the image and returns a number, and that number is written into a span element. The source and destination arrays are read from the LI elements of hidden ULs. This way the user sees a counter counting from zero up to the total number of images being imported. You can see my jQuery code below: jQuery(document).ready(function($) { $('#mrc_imp_img').click(function(){ var dstA = []; var srcA = []; $("#mrc_dst li").each(function() { dstA.push($(this).text()) }); $("#mrc_src li").each(function() { srcA.push($(this).text()) }); $.each(srcA, function (i,v) { $('#mrc_imgimport span.fc').load('/wp-content/plugins/myplugin/imp.php?num='+i+'&dst='+dstA[i]+'&src='+srcA[i]); }); }); }); This works pretty well, but sometimes it looks like the load function isn't updating the DOM as fast as it should, because sometimes the number the span is updated with is lower than the previous one, and almost every time a lower number replaces the last number at the end. How can I prevent this from happening, and how can I make it hide '#mrc_imp_img' when the $.each() loop is done?
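
    The out-of-order numbers are a symptom of the Ajax responses racing each other. One hedged way around it is to keep a local counter, update the span from each request's completion callback instead of from the response body, and hide the button once every request has finished (illustrative sketch reusing the question's selectors):

        var finished = 0;
        $.each(srcA, function (i, v) {
            $.get('/wp-content/plugins/myplugin/imp.php',
                  { num: i, dst: dstA[i], src: srcA[i] },
                  function () {
                      finished++;                               // runs when this particular import completes
                      $('#mrc_imgimport span.fc').text(finished);
                      if (finished === srcA.length) {
                          $('#mrc_imp_img').hide();             // all imports done
                      }
                  });
        });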

    Read the article

  • javascript accordion - tracking time question

    - by JohnMerlino
    Hey all, I was reading up on this javascript tutorial: http://www.switchonthecode.com/tutor...ccordion-menus Basically, it shows you how to create an accordion using pure javascript, not jquery. All made sense to me until the actual part of tracking the animation. He says "Because of all that, the first thing we do in the animation function is figure out how much time has passed since the last animation iteration." And then uses this code: Code: var elapsedTicks = curTick - lastTick; lastTick is equal to the value of when the function was called (Date().getTime()) and curTick is equal to the value when the function was received. I don't understand why we are subtracting one from the other right here. I can't imagine that there's any noticeable time difference between these two values. Or maybe I'm missing something. Is that animate() function only called once every time a menu title is clicked or is it called several times to create the incremental animation effect? setTimeout("animate(" + new Date().getTime() + "," + TimeToSlide + ",'" + openAccordion + "','" + nID + "')", 33); Thanks for any response.
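
    For what it's worth, an animate() function in that kind of tutorial is re-scheduled with setTimeout on every frame, so it runs many times per click; the subtraction just measures how long the previous frame actually took, so the animation advances by elapsed time rather than by frame count. A rough illustrative sketch (not the tutorial's exact code):

        function animate(lastTick, timeToSlide, element, progress) {
            var curTick = new Date().getTime();
            var elapsedTicks = curTick - lastTick;      // ms since the previous frame; it varies, hence the subtraction
            progress += elapsedTicks / timeToSlide;     // fraction of the slide completed so far
            // ... position `element` according to `progress` ...
            if (progress < 1) {
                setTimeout(function () { animate(curTick, timeToSlide, element, progress); }, 33);
            }
        }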

    Read the article

  • Beginner question: How to count how often each number is drawn in a lottery and output it in a list?

    - by elementz
    I am writing this little Lottery application. Now the plan is to count how often each number has been drawn during each iteration of the Lottery, and store this somewhere. My guess is that I would need to use a HashMap that has 6 keys and increments the value by one every time the respective key's number is drawn. But how would I accomplish this? My code so far: public void numberCreator() { // creating and initializing a Random generator Random rand = new Random(); // A HashMap to store the numbers picked. HashMap hashMap = new HashMap(); // A TreeMap to sort the numbers picked. TreeMap treeMap = new TreeMap(); // creating an ArrayList which will store the pool of available Numbers List<Integer>numPool = new ArrayList<Integer>(); for (int i=1; i<50; i++){ // add the available Numbers to the pool numPool.add(i); hashMap.put(nums[i], 0); } // array to store the lotto numbers int [] nums = new int [6]; for (int i =0; i < nums.length; i++){ int numPoolIndex = rand.nextInt(numPool.size()); nums[i] = numPool.get(numPoolIndex); // check how often a number has been called and store the new amount in the Map int counter = hashMap.get numPool.remove(numPoolIndex); } System.out.println(Arrays.toString(nums)); } Maybe someone can tell me if I have the right idea, or even how I would implement the map properly?
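
    A minimal sketch of the counting idea, assuming the drawn numbers end up in the nums array as in the question: one java.util.HashMap keyed by the drawn number, incremented once per draw, kept across runs of numberCreator().

        Map<Integer, Integer> drawCounts = new HashMap<Integer, Integer>();

        for (int num : nums) {
            Integer current = drawCounts.get(num);              // null if this number has never been drawn
            drawCounts.put(num, current == null ? 1 : current + 1);
        }

        // e.g. after several draws: {3=2, 17=1, 42=4, ...}
        System.out.println(drawCounts);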

    Read the article

  • Heapsort not working in Python for list of strings using heapq module

    - by VSN
    I was reading the python 2.7 documentation when I came across the heapq module. I was interested in the heapify() and the heappop() methods. So, I decided to write a simple heapsort program for integers: from heapq import heapify, heappop user_input = raw_input("Enter numbers to be sorted: ") data = map (int, user_input.split(",")) new_data = [] for i in range(len(data)): heapify(data) new_data.append(heappop(data)) print new_data This worked like a charm. To make it more interesting, I thought I would take away the integer conversion and leave it as a string. Logically, it should make no difference and the code should work as it did for integers: from heapq import heapify, heappop user_input = raw_input("Enter numbers to be sorted: ") data = user_input.split(",") new_data = [] for i in range(len(data)): heapify(data) print data new_data.append(heappop(data)) print new_data Note: I added a print statement in the for loop to see the heapified list. Here's the output when I ran the script: `$ python heapsort.py Enter numbers to be sorted: 4, 3, 1, 9, 6, 2 [' 1', ' 3', ' 2', ' 9', ' 6', '4'] [' 2', ' 3', '4', ' 9', ' 6'] [' 3', ' 6', '4', ' 9'] [' 6', ' 9', '4'] [' 9', '4'] ['4'] [' 1', ' 2', ' 3', ' 6', ' 9', '4']` The reasoning I applied was that since the strings are being compared, the tree should be the same if they were numbers. As is evident, the heapify didn't work correctly after the third iteration. Could someone help me figure out if I am missing something here? I'm running Python 2.4.5 on RedHat 3.4.6-9. Thanks, VSN
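
    The heap itself is fine; the strings simply compare lexicographically, and the leading space left by split(",") sorts before any digit, which is why ' 9' ends up smaller than '4'. Stripping the whitespace (or converting to int as in the first script) restores the expected order — a small illustrative fix in the same Python 2 style:

        from heapq import heapify, heappop

        user_input = raw_input("Enter numbers to be sorted: ")
        data = [s.strip() for s in user_input.split(",")]   # ' 9' -> '9'

        heapify(data)            # heapify once; heappop maintains the heap property afterwards
        new_data = []
        while data:
            new_data.append(heappop(data))
        print new_data           # note: as strings, '11' would still sort before '4'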

    Read the article

  • Ruby on Rails: temporarily update an attribute into cache without saving it?

    - by randombits
    I have a bit of code that depicts this hypothetical setup below. A class Foo which contains many Bars. Bar belongs to one and only one Foo. At some point, Foo can do a finite loop that lapses 2+ iterations. In that loop, something like the following happens: bar = Bar.find_where_in_use_is_zero bar.in_use = 1 Basically what find_where_in_use_is_zero does something like this in as far as SQL goes: SELECT * from bars WHERE in_use = 0 Now the problem I'm facing is that I cannot run the following line of code after bar.in_use =1 is invoked: bar.save The reason is clear, I'm still looping and the new Foo hasn't been created, so we don't have a foo_id to put into bars.foo_id. Even if I set to allow foo_id to be NULL, we have a problem where one of the bars can fail validation and the existing one was saved to the database. In my application, that doesn't work. The entire request is atomic, either all succeeds or fails together. What happens next, is that in my loop, I have the potential to select the same exact bar that I did on a previous iteration of the loop since the in_use flag will not be set to 1 until @foo.save is called. Is there anyway to work around this condition and temporarily set the in_use attribute to 1 for subsequent iterations of the loop so that I retrieve an available bar instance?

    Read the article

  • Using Moq to Validate Separate Invocations with Distinct Arguments

    - by Thermite
    I'm trying to validate the values of arguments passed to subsequent mocked method invocations (of the same method), but cannot figure out a valid approach. A generic example follows: public class Foo { [Dependency] public Bar SomeBar { get; set; } public void SomeMethod() { this.SomeBar.SomeOtherMethod("baz"); this.SomeBar.SomeOtherMethod("bag"); } } public class Bar { public void SomeOtherMethod(string input) { } } public class MoqTest { [TestMethod] public void RunTest() { Mock<Bar> mock = new Mock<Bar>(); Foo f = new Foo(); mock.Setup(m => m.SomeOtherMethod(It.Is<string>("baz"))); mock.Setup(m => m.SomeOtherMethod(It.Is<string>("bag"))); // this of course overrides the first call f.SomeMethod(); mock.VerifyAll(); } } Using a Function in the Setup might be an option, but then it seems I'd be reduced to some sort of global variable to know which argument/iteration I'm verifying. Maybe I'm overlooking the obvious within the Moq framework?
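
    One way to check both invocations without the second Setup overriding the first is to verify each argument after the fact; a hedged sketch using Moq's Verify with Times (it assumes SomeOtherMethod is virtual, or Bar is an interface, so Moq can intercept it):

        [TestMethod]
        public void RunTest()
        {
            var mock = new Mock<Bar>();
            var f = new Foo { SomeBar = mock.Object };   // wire the mock into the dependency

            f.SomeMethod();

            mock.Verify(m => m.SomeOtherMethod("baz"), Times.Once());
            mock.Verify(m => m.SomeOtherMethod("bag"), Times.Once());
        }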

    Read the article

  • Returning a dynamically created array from function

    - by informer2000
    I'm trying to create a function that would dynamically allocate an array, sets the values of the elements, and returns the size of the array. The array variable is a pointer that is declared outside the function and passed as a parameter. Here is the code: #include <cstdlib> #include <iostream> using namespace std; int doArray(int *arr) { int sz = 10; arr = (int*) malloc(sizeof(int) * sz); for (int i=0; i<sz; i++) { arr[i] = i * 5; } return sz; } int main(int argc, char *argv[]) { int *arr = NULL; int size = doArray(arr); for (int i=0; i<size; i++) { cout << arr[i] << endl; } return 0; } For some reason, the program terminates on the first iteration of the for loop in main()! Am I doing something wrong?
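
    The crash comes from the pointer being passed by value: the malloc'd address is assigned to the function's local copy of arr, so main's arr stays NULL and the loop dereferences a null pointer. Passing the pointer by reference (or returning it) is the usual fix; a minimal sketch:

        // Pass the pointer by reference so the caller sees the allocation.
        int doArray(int*& arr)
        {
            int sz = 10;
            arr = (int*) malloc(sizeof(int) * sz);
            for (int i = 0; i < sz; i++)
                arr[i] = i * 5;
            return sz;
        }
        // main() can stay as it is; remember to free(arr) when done.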

    Read the article

  • Parallel features in .Net 4.0

    - by Jonathan.Peppers
    I have been going over the practicality of some of the new parallel features in .Net 4.0. Say I have code like so: foreach (var item in myEnumerable) myDatabase.Insert(item.ConvertToDatabase()); Imagine myDatabase.Insert is performing some work to insert to a SQL database. Theoretically you could write: Parallel.ForEach(myEnumerable, item => myDatabase.Insert(item.ConvertToDatabase())); And automatically you get code that takes advantage of multiple cores. But what if myEnumerable can only be interacted with by a single thread? Will the Parallel class enumerate by a single thread and only dispatch the result to worker threads in the loop? What if myDatabase can only be interacted with by a single thread? It would certainly not be better to make a database connection per iteration of the loop. Finally, what if my "var item" happens to be a UserControl or something that must be interacted with on the UI thread? What design pattern should I follow to solve these problems? It's looking to me that switching over to Parallel/PLinq/etc is not exactly easy when you are dealing with real-world applications.
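
    Parallel.ForEach does let you amortize per-thread setup such as a database connection through its localInit/localFinally overload, so you don't pay for a connection per iteration; the source enumerator is consumed under the library's own locking. A hedged sketch — OpenDatabaseConnection is a made-up stand-in, not the poster's type:

        Parallel.ForEach(
            myEnumerable,
            () => OpenDatabaseConnection(),                // localInit: runs once per worker task
            (item, loopState, db) =>
            {
                db.Insert(item.ConvertToDatabase());       // items are handed to workers one at a time
                return db;                                 // carried to this worker's next iteration
            },
            db => db.Dispose());                           // localFinally: runs once per worker task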

    Read the article

  • Empty value when iterating a dictionary with .iteritems() method

    - by ptpatil
    I am having some weird trouble with dictionaries, I am trying to iterate pairs from a dictionary to pass to another function. The loop for the iterator though for some reason always returns empty values. Here is the code: def LinktoCentral(self, linkmethod): if linkmethod == 'sim': linkworker = Linker.SimilarityLinker() matchlist = [] for k,v in self.ToBeMatchedTable.iteritems(): matchlist.append(k, linkworker.GetBestMatch(v, self.CentralDataTable.items())) Now if I insert a print line above the for loop: matchlist = [] print self.ToBeMatchedTable.items() for k,v in self.ToBeMatchedTable.iteritems(): matchlist.append(k, linkworker.GetBestMatch(v, self.CentralDataTable.items())) I get the data that is supposed to be in the dictionary printed out. The values of the dictionary are list objects. An example tuple I get from the dictionary when printing just above the for loop: >>> (1, ['AARP/United Health Care', '8002277789', 'PO Box 740819', 'Atlanta', 'GA', '30374-0819', 'Paper', '3676']) However, the for loop gives empty lists to the linkworker.GetBestMatch method. If I put a print line just below the for loop, here is what I get: Code: matchlist = [] for k,v in self.ToBeMatchedTable.iteritems(): print self.ToBeMatchedTable.items() matchlist.append(k, linkworker.GetBestMatch(v, self.CentralDataTable.items())) ## Place holder for line to send match list to display window return matchlist Result of first iteration: >>> (0, ['', '', '', '', '', '', '', '']) I literally have no idea whats going on, there is nothing else going on while this loop is executed. Any stupid mistakes I made?

    Read the article

  • Rebuilding indexes does not change the fragmentation % for nonclustered indexes.

    - by Noddy
    For starters, I am no DBA, and I am working on rebuilding indexes. I used the TSQL script from MSDN to ALTER INDEX based on the fragmentation percent returned by dm_db_index_physical_stats: if the fragmentation percent is more than 30, do a REBUILD, otherwise do a REORGANIZE. What I found was that in the first iteration there were 87 records which needed defragmenting. I ran the script and all 87 indexes (clustered & nonclustered) were rebuilt or reorganized. When I got the stats from dm_db_index_physical_stats again, there were still 27 records which needed defragmenting, and all of these were NONCLUSTERED indexes. All the clustered indexes were fixed. No matter how many times I run the script to defrag these records, I still have the same indexes needing defragmentation, most of them with the same fragmentation %. Nothing changes after this. Note: I did not perform any inserts/updates/deletes on the tables during these iterations. Still the rebuild/reorganize did not result in any change. More information: Using SQL 2008. Script as available on MSDN: http://msdn.microsoft.com/en-us/library/ms188917.aspx Could you please explain why these 27 nonclustered indexes are not being changed/modified? Any help on this would be highly appreciated. Nod
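
    A likely explanation, offered as a hedge rather than a diagnosis: the stubborn indexes are probably very small, and fragmentation on an index that spans only a handful of pages cannot really be "fixed" because the pages can never be made contiguous, so it is common practice to ignore anything under roughly 1000 pages. A query illustrating that filter:

        SELECT OBJECT_NAME(ips.object_id) AS table_name,
               i.name                     AS index_name,
               ips.avg_fragmentation_in_percent,
               ips.page_count
        FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
        JOIN sys.indexes AS i
          ON i.object_id = ips.object_id AND i.index_id = ips.index_id
        WHERE ips.avg_fragmentation_in_percent > 30
          AND ips.page_count > 1000;   -- tiny indexes stay "fragmented" no matter how often they are rebuilt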

    Read the article

  • How to setup multiple Apache SSL sites using multiple IP addresses

    - by Jeff
    How do you setup a single Apache2 config to host multiple HTTPS sites each on their own IP address? There will also be multiple HTTP sites on just a single IP address. I do not want to use Server Name Indication (SNI) as described here, and I'm only concerned with the important top-level Apache directives. That is, I just need to know the skeleton of how my config should look. The basic setup looks like this:

        Hosted on 1.1.1.1:80 (HTTP)   - example.com - example.net - example.org
        Hosted on 2.2.2.2:443 (HTTPS) - secure.com
        Hosted on 3.3.3.3:443 (HTTPS) - secure.net
        Hosted on 4.4.4.4:443 (HTTPS) - secure.org

    And here are the important config directives I have so far, which is the closest I've come to a working iteration, but still no dice. I know I'm close, just need a little push in the right direction.

        Listen 1.1.1.1:80
        Listen 2.2.2.2:443
        Listen 3.3.3.3:443
        Listen 4.4.4.4:443
        NameVirtualHost 1.1.1.1:80
        NameVirtualHost 2.2.2.2:443
        NameVirtualHost 3.3.3.3:443
        NameVirtualHost 4.4.4.4:443

        # HTTP VIRTUAL HOSTS:
        <VirtualHost 1.1.1.1:80>
            ServerName example.com
            DocumentRoot /home/foo/example.com
        </VirtualHost>
        <VirtualHost 1.1.1.1:80>
            ServerName example.net
            DocumentRoot /home/foo/example.net
        </VirtualHost>
        <VirtualHost 1.1.1.1:80>
            ServerName example.org
            DocumentRoot /home/foo/example.org
        </VirtualHost>

        # HTTPS VIRTUAL HOSTS:
        <VirtualHost 2.2.2.2:443>
            ServerName secure.com
            DocumentRoot /home/foo/secure.com
            SSLEngine on
            SSLCertificateFile /home/foo/ssl/secure.com.crt
            SSLCertificateKeyFile /home/foo/ssl/secure.com.key
            SSLCACertificateFile /home/foo/ssl/ca.txt
        </VirtualHost>
        <VirtualHost 3.3.3.3:443>
            ServerName secure.net
            DocumentRoot /home/foo/secure.net
            SSLEngine on
            SSLCertificateFile /home/foo/ssl/secure.net.crt
            SSLCertificateKeyFile /home/foo/ssl/secure.net.key
            SSLCACertificateFile /home/foo/ssl/ca.txt
        </VirtualHost>
        <VirtualHost 4.4.4.4:443>
            ServerName secure.org
            DocumentRoot /home/foo/secure.org
            SSLEngine on
            SSLCertificateFile /home/foo/ssl/secure.org.crt
            SSLCertificateKeyFile /home/foo/ssl/secure.org.key
            SSLCACertificateFile /home/foo/ssl/ca.txt
        </VirtualHost>

    For what it's worth, I prefer to have each of my SSL sites on their own IP instead of including one of them on the primary VHOST IP. Any links which show a standard setup would be more than welcome!

    Read the article

  • Session Report: What’s New in JSF: A Complete Tour of JSF 2.2

    - by Janice J. Heiss
    On Wednesday, Ed Burns, Consulting Staff Member at Oracle, presented a session, CON3870 -- “What’s New in JSF: A Complete Tour of JSF 2.2,” in which he provided an update on recent developments in JavaServer Faces 2.2. He began by emphasizing that, “JavaServer Faces 2.2 continues the evolution of the Java EE standard user interface technology. Like previous releases, this iteration is very community-driven and transparent.” He pointed out that since JSF was introduced at the 2001 JavaOne Keynote, it has had a long and successful run and has found a home in applications where the UI logic resides entirely on the server where the model and UI logic is. In such cases, the browser performs fairly simple functions. However, developers can take advantage of the power of browsers, something that Project Avatar is focused on by letting developers author their applications so the UI logic is running on the client and communicating to the back end via RESTful web services. “Most importantly,” remarked Burns, “JSF 2.2 offers a really good migration path because even in the scope of one application you could have an app written with JSF that has its UI logic on the server and, on a gradual basis, you could migrate parts of the app over to use client-side technologies. This can be done at any level of granularity – per page or per collection of pages. It all depends on what you want to do.” His presentation, which focused on the basic new features of JSF 2.2, began by restating the scope of JSF and encouraged attendees to check out Roger Kitain’s session: CON5133 “Techniques for Responsive Real-Time Web UIs.” Burns explained that JSF has endured because, “We still need web apps that are maintainable, localizable, quick to build, accessible, secure, look great and are fun to use.” It is used on every continent – the curious can go here to check out where its unofficial usage is tracked. He emphasized the significance of the UI logic being substantially on the server. This: Separates Component Semantics from Rendering, Allows components to “own” their little patch of the UI -- encode/decode, And offers a well-defined lifecycle: Inversion of Control. Burns reminded attendees that JSR-344, the spec for JSF 2.2, is now on Java Community Process 2.8, a revised version of the JCP that allows for more openness and transparency. He then offered some tools for community access to JSF 2.2:    * Public java.net projects spec http://jsf-spec.java.net/ impl http://jsf.java.net/ Open Source: GPL+Classpath Exception    * Mailing Lists [email protected]                                Public readable archive, JSPA signed member read/write [email protected]                                     Public readable archive, any java.net member read/write                         All mail sent to jsr344-experts is sent to users. * Issue Tracker spec http://jsf-spec.java.net/issues/ impl http://jsf.java.net/issues/ JSF 2.2, which is JSR 344, has a Public Review Draft planned by December 2012 with no need for a Renewal Ballot. The Early Draft Review of JSR 344 was published on December 8, 2011. Interested developers are encouraged to offer their input. 
Six Big Ticket Features of JSF 2.2 Burns summarized the six big ticket features of JSF 2.2:* HTML5 Friendly Markup Support Pass through attributes and elements * Faces Flows* Cross Site Request Forgery Protection* Loading Facelets via ResourceHandler* File Upload Component* Multi-Templating He explained that he called it “HTML 5 friendly” because there is really nothing HTML 5 specific about it -- it could be 4. But it enables developers to use new elements that are present in HTML5 without having a JSF component library that is written to take advantage of those specifically. It gives the page author the ability to use plain HTML5 to write their page, but to still take advantage of the server-side available in JSF. He presented a demo showing JSF 2.2’s ability to leverage the expressiveness of HTML5. Burns then explained the significance of face flows, which offer function points and quantify how much work has taken place, something of great value to JSF users. He went on to talk about JSF 2.2.’s cross-site request forgery protection (CSRF) and offered details about how it protects applications against attack. Then he talked about JSF 2.2’s File Upload Component and explained that the final specification will have Ajax and non-Ajax support. The current milestone has non-Ajax support implemented. He then went on to explain its capacity to add facelets through ResourceHandler. Previously, JSF 2.0 added Facelets and ResourceHandler as disparate units; now in JSF 2.2 the two concepts are unified. Finally, he explained the concept of multi-templating in JSF 2.2 and went on to discuss more medium-level features of the release. For an easy, low maintenance way of staying in touch with JSF developments go to JSF’s Twitter page where every month or so, important updates are offered.

    Read the article

  • “Being Agile” Means No Documentation, Right?

    - by jesschadwick
    Ask most software professionals what Agile is and they’ll probably start talking about flexibility and delivering what the customer wants.  Some may even mention the word “iterations”.  But inevitably, they’ll say at some point that it means less or even no documentation.  After all, doesn’t creating, updating, and circulating painstakingly comprehensive documentation that everyone and their mother have officially signed off on go against the very core of Agile?  Of course it does!  But really, they’re missing the point! Read The Agile Manifesto. (No, seriously - read it now. It’s short. I’ll wait.)  It’s essentially a list of values.  More specifically, it’s a right-side/left-side weighted list of values:  “Value this over that”. Many people seem to get the impression that this is really a “good vs. bad” list and that those values on the right side are evil and should essentially be tossed on the floor.  This leads to the conclusion that in order to be Agile we must throw away our fancy expensive tools, document as little as possible, and scoff at the idea of a project plan.  This conclusion is quite convenient because it essentially means “less work, more productivity!” (particularly in regards to the documentation and project planning).  I couldn’t disagree with this conclusion more. My interpretation of the Manifesto targets “over” as the operative word.  It’s not just a list of right vs. wrong or good vs. bad.  It’s a list of priorities.  In other words, none of the concepts on the list should be removed from your development lifecycle – they are all important… just not equally important.  This is not a unique interpretation, in fact it says so right at the end of the manifesto! So, the next time your team sits down to tackle that big new project, don’t make the first order of business to outlaw all meetings, documentation, and project plans.  Instead, collaborate with both your team and the business members involved (you do have business members sitting in the room, directly involved in the project planning, right?) and determine the bare minimum that will allow all of you to work and communicate in the best way possible.  This often means that you can pick and choose which parts of the Agile methodologies and process work for your particular project and end up with an amalgamation of Waterfall, Agile, XP, SCRUM and whatever other methodologies the members of your team have been exposed to (my favorite is “SCRUMerfall”). The biggest implication of this is that there is no one way to implement Agile.  There is no checklist with which you can tick off boxes and confidently conclude that, “Yep, we’re Agile™!”  In fact, depending on your business and the members of your team, moving to Agile full-bore may actually be ill-advised.  Such a drastic change just ends up taking everyone out of their comfort zone which they inevitably fall back into by the end of the project.  This often results in frustration to the point that Agile is abandoned altogether because “we just need to ship something!”  Needless to say, this is far more devastating to a project. Instead, I offer this approach: keep it simple and take it slow.  If your business members or customers are only involved at the beginning phases and nowhere to be seen until the project is delivered, invite them to your daily meetings; encourage them to keep up to speed on what’s going on on a daily basis and provide feedback.  
If your current process is heavy on the documentation, try to reduce it as opposed to eliminating it outright.  If you need a “TPS Change Request” signed in triplicate with a 5-day “cooling off period” before a change is implemented, try a simple bug tracking system!  Tighten the feedback loop! Finally, at the end of every “iteration” (whatever that means to you, as long as it’s relatively frequent), take as much time as you can spare (even if it’s an hour or so) and perform some kind of retrospective.  Learn from your mistakes.  Figure out what’s working for you and what’s not, then fix it.  Before you know it you’ve got a handful of iterations and/or projects under your belt and you sit down with your team to realize that, “Hey, this is working - we’re pretty Agile!”  After all, Agile is a Zen journey.  It’s a destination that you aim for, not force, and even if you never reach true “enlightenment” that doesn’t mean your team can’t be exponentially better off from merely taking the journey.

    Read the article

  • Can't correctly install Lazarus

    - by user206316
    I have a little problem with installing and running Lazarus. I just upgrade ubuntu from 13.04 to 13.10. When i had 13.04, i could install lazarus without any problems, but in 13.10 lazarus magicaly dissapeared, and when i tried install it from ubuntu software center, it said something like in my software resources lazarus-ide-0.9.30.4 doesnt exist. After some research on net i tried delete all files from earlier installations, download deb packages from sourceforge and install them, but when i want to instal fpc-src, error shows up with output: (Reading database ... 100% (Reading database ... 239063 files and directories currently installed.) Unpacking fpc-src (from .../Stiahnut/Lazarus/fpc-src.deb) ... dpkg: error processing /home/richi/Stiahnut/Lazarus/fpc-src.deb (--install): trying to overwrite '/usr/share/fpcsrc/2.6.2/rtl/nativent/tthread.inc', which is also in package fpc-source-2.6.2 2.6.2-5 dpkg-deb (subprocess): decompressing archive member: internal gzip write error: Broken pipe dpkg-deb: error: subprocess <decompress> returned error exit status 2 dpkg-deb (subprocess): cannot copy archive member from '/home/richi/Stiahnut/Lazarus/fpc-src.deb' to decompressor pipe: failed to write (Broken pipe) when i started lazarus, it of course tell me that it cant find fpc compier and fpc sources. So, please, i really need program for school and i dont wanna reinstall os anymore or something like that :( (Ubuntu 13.10 64bit) P.S: im not skilled in linux so if u know some commands to fix it just write them for copy and paste :) P.P.S:Sorry for bad English, im Slovak xD P.P.P.S: Thank so much for any answers update: output from sudo dpkg -l | grep "^rc" richi@Richi-Ubuntu:~/lazarus1.0.12$ sudo dpkg -l | grep "^rc" rc account-plugin-generic-oauth 0.10bzr13.03.26-0ubuntu1.1 amd64 GNOME Control Center account plugin for single signon - generic OAuth rc appmenu-gtk:amd64 12.10.3daily13.04.03-0ubuntu1 amd64 Export GTK menus over DBus rc appmenu-gtk3:amd64 12.10.3daily13.04.03-0ubuntu1 amd64 Export GTK menus over DBus rc fp-compiler-2.6.0 2.6.0-9 amd64 Free Pascal - compiler rc fp-utils-2.6.0 2.6.0-9 amd64 Free Pascal - utilities rc lazarus-ide-0.9.30.4 0.9.30.4-4 amd64 IDE for Free Pascal - common IDE files rc lazarus-ide-1.0.10 1.0.10+dfsg-1 amd64 IDE for Free Pascal - common IDE files rc lcl-utils-0.9.30.4 0.9.30.4-4 amd64 Lazarus Components Library - command line build tools rc lcl-utils-1.0.10 1.0.10+dfsg-1 amd64 Lazarus Components Library - command line build tools rc libbamf3-1:amd64 0.4.0daily13.06.19~13.04-0ubuntu1 amd64 Window matching library - shared library rc libboost-filesystem1.49.0 1.49.0-4 amd64 filesystem operations (portable paths, iteration over directories, etc) in C++ rc libboost-signals1.49.0 1.49.0-4 amd64 managed signals and slots library for C++ rc libboost-system1.49.0 1.49.0-4 amd64 Operating system (e.g. 
diagnostics support) library rc libboost-thread1.49.0 1.49.0-4 amd64 portable C++ multi-threading rc libbrlapi0.5:amd64 4.4-8ubuntu4 amd64 braille display access via BRLTTY - shared library rc libcamel-1.2-40 3.6.4-0ubuntu1.1 amd64 Evolution MIME message handling library rc libcolumbus0-0 0.4.0daily13.04.16~13.04-0ubuntu1 amd64 error tolerant matching engine - shared library rc libdns95 1:9.9.2.dfsg.P1-2ubuntu2.1 amd64 DNS Shared Library used by BIND rc libdvbpsi7 0.2.2-1 amd64 library for MPEG TS and DVB PSI tables decoding and generating rc libebackend-1.2-5 3.6.4-0ubuntu1.1 amd64 Utility library for evolution data servers rc libedata-book-1.2-15 3.6.4-0ubuntu1.1 amd64 Backend library for evolution address books rc libedata-cal-1.2-18 3.6.4-0ubuntu1.1 amd64 Backend library for evolution calendars rc libgc1c3:amd64 1:7.2d-0ubuntu5 amd64 conservative garbage collector for C and C++ rc libgd2-xpm:amd64 2.0.36~rc1~dfsg-6.1ubuntu1 amd64 GD Graphics Library version 2 rc libgd2-xpm:i386 2.0.36~rc1~dfsg-6.1ubuntu1 i386 GD Graphics Library version 2 rc libgnome-desktop-3-4 3.6.3-0ubuntu1 amd64 Utility library for loading .desktop files - runtime files rc libgphoto2-2:amd64 2.4.14-2 amd64 gphoto2 digital camera library rc libgphoto2-2:i386 2.4.14-2 i386 gphoto2 digital camera library rc libgphoto2-port0:amd64 2.4.14-2 amd64 gphoto2 digital camera port library rc libgphoto2-port0:i386 2.4.14-2 i386 gphoto2 digital camera port library rc libgtksourceview-3.0-0:amd64 3.6.3-0ubuntu1 amd64 shared libraries for the GTK+ syntax highlighting widget rc libgweather-3-1 3.6.2-0ubuntu1 amd64 GWeather shared library rc libharfbuzz0:amd64 0.9.13-1 amd64 OpenType text shaping engine rc libibus-1.0-0:amd64 1.4.2-0ubuntu2 amd64 Intelligent Input Bus - shared library rc libical0 0.48-2 amd64 iCalendar library implementation in C (runtime) rc libimobiledevice3 1.1.4-1ubuntu6.2 amd64 Library for communicating with the iPhone and iPod Touch rc libisc92 1:9.9.2.dfsg.P1-2ubuntu2.1 amd64 ISC Shared Library used by BIND rc libkms1:amd64 2.4.46-1 amd64 Userspace interface to kernel DRM buffer management rc libllvm3.2:i386 1:3.2repack-7ubuntu1 i386 Low-Level Virtual Machine (LLVM), runtime library rc libmikmod2:amd64 3.1.12-5 amd64 Portable sound library rc libpackagekit-glib2-14:amd64 0.7.6-3ubuntu1 amd64 Library for accessing PackageKit using GLib rc libpoppler28:amd64 0.20.5-1ubuntu3 amd64 PDF rendering library rc libraw5:amd64 0.14.7-0ubuntu1.13.04.2 amd64 raw image decoder library rc librhythmbox-core6 2.98-0ubuntu5 amd64 support library for the rhythmbox music player rc libsdl-mixer1.2:amd64 1.2.12-7ubuntu1 amd64 Mixer library for Simple DirectMedia Layer 1.2, libraries rc libsnmp15 5.4.3~dfsg-2.7ubuntu1 amd64 SNMP (Simple Network Management Protocol) library rc libsyncdaemon-1.0-1 4.2.0-0ubuntu1 amd64 Ubuntu One synchronization daemon library rc libunity-core-6.0-5 7.0.0daily13.06.19~13.04-0ubuntu1 amd64 Core library for the Unity interface. 
rc libusb-0.1-4:i386 2:0.1.12-23.2ubuntu1 i386 userspace USB programming library rc libwayland0:amd64 1.0.5-0ubuntu1 amd64 wayland compositor infrastructure - shared libraries rc linux-image-3.8.0-19-generic 3.8.0-19.30 amd64 Linux kernel image for version 3.8.0 on 64 bit x86 SMP rc linux-image-3.8.0-31-generic 3.8.0-31.46 amd64 Linux kernel image for version 3.8.0 on 64 bit x86 SMP rc linux-image-extra-3.8.0-19-generic 3.8.0-19.30 amd64 Linux kernel image for version 3.8.0 on 64 bit x86 SMP rc linux-image-extra-3.8.0-31-generic 3.8.0-31.46 amd64 Linux kernel image for version 3.8.0 on 64 bit x86 SMP rc screen-resolution-extra 0.15ubuntu1 all Extension for the GNOME screen resolution applet rc unity-common 7.0.0daily13.06.19~13.04-0ubuntu1 all Common files for the Unity interface.
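
    For what it's worth, the dpkg error above is a plain file conflict: the downloaded fpc-src.deb ships files that the installed Ubuntu package fpc-source-2.6.2 already owns. One common way out, offered only as a hedged suggestion (check what else depends on the package first), is to remove the distribution package before installing the downloaded one:

        sudo apt-get remove fpc-source-2.6.2
        sudo dpkg -i fpc-src.deb
        sudo apt-get install -f    # pull in anything the .deb still needs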

    Read the article

  • Finding the problem on a partially succeeded build

    - by Martin Hinshelwood
    Now that I have the Build failing because of a genuine bug and not just because of a test framework failure, lets see if we can trace through to finding why the first test in our new application failed. Lets look at the build and see if we can see why there is a red cross on it. First, lets open that build list. On Team Explorer Expand your Team Project Collection | Team Project and then Builds. Double click the offending build. Figure: Opening the Build list is a key way to see what the current state of your software is.   Figure: A test is failing, but we can now view the Test Results to find the problem      Figure: You can quite clearly see that the test has failed with “The device is not ready”. To me the “The Device is not ready” smacks of a System.IO exception, but it passed on my local computer, so why not on the build server? Its a FaultException so it is most likely coming from the Service and not the client, so lets take a look at the client method that the test is calling: bool IProfileService.SaveDefaultProjectFile(string strComputerName) { ProjectFile file = new ProjectFile() { ProjectFileName = strComputerName + "_" + System.DateTime.Now.ToString("yyyyMMddhhmmsss") + ".xml", ConnectionString = "persist security info=False; pooling=False; data source=(local); application name=SSW.SQLDeploy.vshost.exe; integrated security=SSPI; initial catalog=SSWSQLDeployNorthwindSample", DateCreated = System.DateTime.Now, DateUpdated = System.DateTime.Now, FolderPath = @"C:\Program Files\SSW SQL Deploy\SampleData\", IsComplete=false, Version = "1.3", NewDatabase = true, TimeOut = 5, TurnOnMSDE = false, Mode="AutomaticMode" }; string strFolderPath = "D:\\"; //LocalSettings.ProjectFileBasePath; string strFileName = strFolderPath + file.ProjectFileName; try { using (FileStream fs = new FileStream(strFileName, FileMode.Create)) { DataContractSerializer serializer = new DataContractSerializer(typeof(ProjectFile)); using (XmlDictionaryWriter writer = XmlDictionaryWriter.CreateTextWriter(fs)) { serializer.WriteObject(writer, file); } } } catch (Exception ex) { //TODO: Log the exception throw ex; return false; } return true; } Figure: You can see on lines 9 and 18 that there are calls being made to specific folders and disks. What is wrong with this code? What assumptions mistakes could the developer have made to make this look OK: That every install would be to “C:\Program Files\SSW SQL Deploy” That every computer would have a “D:\\” That checking in code at 6pm because the had to go home was a good idea. lets solve each of these problems: We are in a web service… lets store data within the web root. So we can call “Server.MapPath(“~/App_Data/SSW SQL Deploy\SampleData”) instead. Never reference an explicit path. If you need some storage for your application use IsolatedStorage. Shelve your code instead. What else could have been done? Code review before check-in – The developer should have shelved their code and asked another dev to look at it. Use Defensive programming – Make sure that any code that has the possibility of failing has checks. Any more options? Let me know and I will add them. What do we do? The correct things to do is to add a Bug to the backlog, but as this is probably going to be fixed in sprint, I will add it directly to the sprint backlog. Right click on the failing test Select “Create Work Item | Bug” Figure: Create an associated bug to add to the backlog. Set the values for the Bug making sure that it goes into the right sprint and Area. 
Make your steps to reproduce as explicit as possible, but “See test” is valid under these circumstances.   Figure: Add it to the correct Area and set the Iteration to the Area name or the Sprint if you think it will be fixed in Sprint and make sure you bring it up at the next Scrum Meeting. Note: make sure you leave the “Assigned To” field blank as in Scrum team members sign up for work, you do not give it to them. The developer who broke the test will most likely either sign up for the bug, or say that they are stuck and need help. Note: Visual Studio has taken care of associating the failing test with the Bug. Save… Technorati Tags: WCF,MSTest,MSBuild,Team Build 2010,Team Test 2010,Team Build,Team Test

    Read the article
