Search Results

Search found 21659 results on 867 pages for 'welcome always'.

  • Evaluating code for a graph [migrated]

    - by mazen.r.f
    This is relatively long code. Please take a look at this code if you are still willing to do so; I will appreciate your feedback. I have spent two days trying to come up with code to represent a graph and calculate the shortest path using Dijkstra's algorithm, but I am not able to get the right result, even though the code runs without errors. The result is not correct and I am always getting 0. I have three classes: Vertex, Edge, and Graph. The Vertex class represents the nodes in the graph; it has an id, a carried value (which carries the weight of the links connected to it while running Dijkstra's algorithm), and a vector of the ids of the other nodes the path goes through before arriving at the node itself. This vector is named previous_nodes. The Edge class represents the edges in the graph and has two vertices (one on each side) and a width (the distance between the two vertices). The Graph class represents the graph. It has two vectors, where one is the vertices included in this graph, and the other is the edges included in the graph. Inside the class Graph, there is a method named shortest() that takes the source node id and the destination and calculates the shortest path using Dijkstra's algorithm. I think that it is the most important part of the code. My theory about the code is that I will create two vectors, one for the vertices in the graph named vertices, and another vector named ver_out (it will include the vertices taken out of the calculation in the graph). I will also have two vectors of type Edge, where one is named edges (for all the edges in the graph), and the other is named track (to temporarily contain the edges linked to the temporary source node in every round). After the calculation of every round, the vector track will be cleared. In main(), I've created five vertices and 10 edges to simulate a graph. The result of the shortest path is supposed to be 4, but I am always getting 0. That means I have something wrong in my code. If you are interested in helping me find my mistake and making the code work, please take a look. The way shortest() works is as follows: at the beginning, all the edges are included in the vector edges. We select the edges related to the source and put them in the vector track, then we iterate through track and add the width of every edge to the vertex (node) related to it (not the source vertex). After that, we clear track, remove the source vertex from the vector vertices, and select a new source. Then we start over again: select the edges related to the new source, put them in track, iterate over the edges in track, adding the weights to the corresponding vertices, then remove this vertex from the vector vertices. Then clear track, select a new source, and so on. 
#include<iostream> #include<vector> #include <stdlib.h> // for rand() using namespace std; class Vertex { private: unsigned int id; // the name of the vertex unsigned int carried; // the weight a vertex may carry when calculating shortest path vector<unsigned int> previous_nodes; public: unsigned int get_id(){return id;}; unsigned int get_carried(){return carried;}; void set_id(unsigned int value) {id = value;}; void set_carried(unsigned int value) {carried = value;}; void previous_nodes_update(unsigned int val){previous_nodes.push_back(val);}; void previous_nodes_erase(unsigned int val){previous_nodes.erase(previous_nodes.begin() + val);}; Vertex(unsigned int init_val = 0, unsigned int init_carried = 0) :id (init_val), carried(init_carried) // constructor { } ~Vertex() {}; // destructor }; class Edge { private: Vertex first_vertex; // a vertex on one side of the edge Vertex second_vertex; // a vertex on the other side of the edge unsigned int weight; // the value of the edge ( or its weight ) public: unsigned int get_weight() {return weight;}; void set_weight(unsigned int value) {weight = value;}; Vertex get_ver_1(){return first_vertex;}; Vertex get_ver_2(){return second_vertex;}; void set_first_vertex(Vertex v1) {first_vertex = v1;}; void set_second_vertex(Vertex v2) {second_vertex = v2;}; Edge(const Vertex& vertex_1 = 0, const Vertex& vertex_2 = 0, unsigned int init_weight = 0) : first_vertex(vertex_1), second_vertex(vertex_2), weight(init_weight) { } ~Edge() {} ; // destructor }; class Graph { private: std::vector<Vertex> vertices; std::vector<Edge> edges; public: Graph(vector<Vertex> ver_vector, vector<Edge> edg_vector) : vertices(ver_vector), edges(edg_vector) { } ~Graph() {}; vector<Vertex> get_vertices(){return vertices;}; vector<Edge> get_edges(){return edges;}; void set_vertices(vector<Vertex> vector_value) {vertices = vector_value;}; void set_edges(vector<Edge> vector_ed_value) {edges = vector_ed_value;}; unsigned int shortest(unsigned int src, unsigned int dis) { vector<Vertex> ver_out; vector<Edge> track; for(unsigned int i = 0; i < edges.size(); ++i) { if((edges[i].get_ver_1().get_id() == vertices[src].get_id()) || (edges[i].get_ver_2().get_id() == vertices[src].get_id())) { track.push_back (edges[i]); edges.erase(edges.begin()+i); } }; for(unsigned int i = 0; i < track.size(); ++i) { if(track[i].get_ver_1().get_id() != vertices[src].get_id()) { track[i].get_ver_1().set_carried((track[i].get_weight()) + track[i].get_ver_2().get_carried()); track[i].get_ver_1().previous_nodes_update(vertices[src].get_id()); } else { track[i].get_ver_2().set_carried((track[i].get_weight()) + track[i].get_ver_1().get_carried()); track[i].get_ver_2().previous_nodes_update(vertices[src].get_id()); } } for(unsigned int i = 0; i < vertices.size(); ++i) if(vertices[i].get_id() == src) vertices.erase(vertices.begin() + i); // removing the sources vertex from the vertices vector ver_out.push_back (vertices[src]); track.clear(); if(vertices[0].get_id() != dis) {src = vertices[0].get_id();} else {src = vertices[1].get_id();} for(unsigned int i = 0; i < vertices.size(); ++i) if((vertices[i].get_carried() < vertices[src].get_carried()) && (vertices[i].get_id() != dis)) src = vertices[i].get_id(); //while(!edges.empty()) for(unsigned int round = 0; round < vertices.size(); ++round) { for(unsigned int k = 0; k < edges.size(); ++k) { if((edges[k].get_ver_1().get_id() == vertices[src].get_id()) || (edges[k].get_ver_2().get_id() == vertices[src].get_id())) { track.push_back (edges[k]); 
edges.erase(edges.begin()+k); } }; for(unsigned int n = 0; n < track.size(); ++n) if((track[n].get_ver_1().get_id() != vertices[src].get_id()) && (track[n].get_ver_1().get_carried() > (track[n].get_ver_2().get_carried() + track[n].get_weight()))) { track[n].get_ver_1().set_carried((track[n].get_weight()) + track[n].get_ver_2().get_carried()); track[n].get_ver_1().previous_nodes_update(vertices[src].get_id()); } else if(track[n].get_ver_2().get_carried() > (track[n].get_ver_1().get_carried() + track[n].get_weight())) { track[n].get_ver_2().set_carried((track[n].get_weight()) + track[n].get_ver_1().get_carried()); track[n].get_ver_2().previous_nodes_update(vertices[src].get_id()); } for(unsigned int t = 0; t < vertices.size(); ++t) if(vertices[t].get_id() == src) vertices.erase(vertices.begin() + t); track.clear(); if(vertices[0].get_id() != dis) {src = vertices[0].get_id();} else {src = vertices[1].get_id();} for(unsigned int tt = 0; tt < edges.size(); ++tt) { if(vertices[tt].get_carried() < vertices[src].get_carried()) { src = vertices[tt].get_id(); } } } return vertices[dis].get_carried(); } }; int main() { cout<< "Hello, This is a graph"<< endl; vector<Vertex> vers(5); vers[0].set_id(0); vers[1].set_id(1); vers[2].set_id(2); vers[3].set_id(3); vers[4].set_id(4); vector<Edge> eds(10); eds[0].set_first_vertex(vers[0]); eds[0].set_second_vertex(vers[1]); eds[0].set_weight(5); eds[1].set_first_vertex(vers[0]); eds[1].set_second_vertex(vers[2]); eds[1].set_weight(9); eds[2].set_first_vertex(vers[0]); eds[2].set_second_vertex(vers[3]); eds[2].set_weight(4); eds[3].set_first_vertex(vers[0]); eds[3].set_second_vertex(vers[4]); eds[3].set_weight(6); eds[4].set_first_vertex(vers[1]); eds[4].set_second_vertex(vers[2]); eds[4].set_weight(2); eds[5].set_first_vertex(vers[1]); eds[5].set_second_vertex(vers[3]); eds[5].set_weight(5); eds[6].set_first_vertex(vers[1]); eds[6].set_second_vertex(vers[4]); eds[6].set_weight(7); eds[7].set_first_vertex(vers[2]); eds[7].set_second_vertex(vers[3]); eds[7].set_weight(1); eds[8].set_first_vertex(vers[2]); eds[8].set_second_vertex(vers[4]); eds[8].set_weight(8); eds[9].set_first_vertex(vers[3]); eds[9].set_second_vertex(vers[4]); eds[9].set_weight(3); unsigned int path; Graph graf(vers, eds); path = graf.shortest(2, 4); cout<< path << endl; return 0; }
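
    For comparison, here is a minimal sketch of Dijkstra's algorithm in Python (an illustrative reference, not a correction of the C++ above), run against the same five vertices and ten edges created in main(); names like dijkstra and edge_list are invented for the sketch. It prints 4, the expected distance from vertex 2 to vertex 4.

      import heapq

      def dijkstra(vertex_count, edge_list, src, dst):
          # Build an undirected adjacency list from (u, v, weight) tuples.
          adjacency = [[] for _ in range(vertex_count)]
          for u, v, w in edge_list:
              adjacency[u].append((v, w))
              adjacency[v].append((u, w))
          dist = [float("inf")] * vertex_count
          dist[src] = 0
          heap = [(0, src)]                     # (distance so far, vertex)
          while heap:
              d, u = heapq.heappop(heap)
              if d > dist[u]:
                  continue                      # stale entry superseded by a better path
              for v, w in adjacency[u]:
                  if d + w < dist[v]:
                      dist[v] = d + w           # relax the edge
                      heapq.heappush(heap, (dist[v], v))
          return dist[dst]

      edges = [(0, 1, 5), (0, 2, 9), (0, 3, 4), (0, 4, 6), (1, 2, 2),
               (1, 3, 5), (1, 4, 7), (2, 3, 1), (2, 4, 8), (3, 4, 3)]
      print(dijkstra(5, edges, 2, 4))           # prints 4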

  • Picasa v.3.6.2 for Mac is suddenly very slow - what are the implications of rebuilding my database?

    - by 3rdparty
    Recently Picasa v3.6.2 for Mac has become very sluggish - mainly noticeable for any (non-destructive) changes made to photos, such as starring an image. This action used to be nearly immediate, but recently I've found it can take 1-3 seconds for Picasa to register the change and become responsive again. I'm considering rebuilding my Picasa database as per these instructions - however I'm concerned I may lose any pre-existing non-destructive (unsaved) edits, along with Picasa albums that I have created. Curious if anyone has experienced Picasa sluggishness with the latest version and/or what their results have been from rebuilding their database. My last resort is to SuperDuper my drive and then rebuild the database, so I can always restore it if I lose critical data.

  • K-12 and Cloud considerations

    - by user736511
    Much like every other Public Sector organization, school districts in the US and Canada are under tremendous pressure to deliver consistent and modern services while operating with reduced budgets, IT personnel shortages, and staff attrition.  Electronic/remote learning and the need for immediate access to resources such as grades, calendars, curricula etc. are straining IT environments that were already burdened with meeting privacy requirements imposed by both regulators and parents/students.  One area viewed as a solution to at least some of the challenges is the use of "Cloud" in education.  Although the concept of "Cloud" is nothing new in education with many providers supplying educational material over the web, school districts defer previously-in-house-hosted services to established commercial vendors to accommodate document sharing, app hosting, and even e-mail.  Doing so, however, does not reduce an important risk, that of privacy.  As always, Cloud implementations are viewed in a skeptical manner because of the perceived reduction in sensitive data management and protection thereof, although with a careful approach and the right tooling, the benefits realized by Clouds can expand to security and privacy.   Oracle's comprehensive approach to data privacy and identity management ensures that the necessary tools are available to support regulations, operational efficiencies and strong security regardless of where the sensitive data is stored - on premise or a Cloud.  Common management tools, role-based access controls, access policy management and engineered systems provided by Oracle can be the foundational pieces on which school districts can build their Cloud implementations without having to worry about security itself. Their biggest challenge, and it is a positive one, is how to best take advantage of Oracle's DB Security and IDM functionality to reduce operational costs while enabling modern applications and data delivery to those who needs access to it. For more information please refer to http://www.oracle.com/us/products/middleware/identity-management/overview/index.html and http://www.oracle.com/us/products/database/security/overview/index.html.

  • Reading OpenDocument spreadsheets using C#

    - by DigiMortal
    Excel with its file formats is not the only spreadsheet application that is widely used. There are also users on Linux and Macs and often they are using OpenOffice and other open-source office packages that use ODF instead of OpenXML. In this post I will show you how to read Open Document spreadsheet in C#. Importer as example My previous post about importers showed you how to build flexible importers support to your web application. This post introduces you practical example of one of my importers. Of course, sensitive code is omitted. We start with ODS importer class and we add new methods as we go. public class OdsImporter : ImporterBase {     public OdsImporter()     {     }       public override string[] SupportedFileExtensions     {         get { return new[] { "ods" }; }     }       public override ImportResult Import(Stream fileStream, long companyId, short year)     {         string contentXml = GetContentXml(fileStream);           var result = new ImportResult();         var doc = XDocument.Parse(contentXml);           var rows = doc.Descendants("{urn:oasis:names:tc:opendocument:xmlns:table:1.0}table-row").Skip(1);           foreach (var row in rows)         {             ImportRow(row, companyId, year, result);         }           return result;     } } The class given here just extends base class for importers (previous post uses interface but as I already told there you move to abstract base class when writing code for real projects). Import method reads data from *.ods file, parses it (it is XML), finds all data rows and imports data. As you may see then first row is skipped. This is because the first row on my sheet is always headers row. Reading ODS file Our import method starts with getting XML from *.ods file. ODS files like OpenXml files are zipped containers that contain different files. We need content.xml as all data is kept there. To get the contents of file we use SharpZipLib library to read uploaded file as *.zip file. private static string GetContentXml(Stream fileStream) {     var contentXml = "";       using (var zipInputStream = new ZipInputStream(fileStream))     {         ZipEntry contentEntry = null;         while ((contentEntry = zipInputStream.GetNextEntry()) != null)         {             if (!contentEntry.IsFile)                 continue;             if (contentEntry.Name.ToLower() == "content.xml")                 break;         }           if (contentEntry.Name.ToLower() != "content.xml")         {             throw new Exception("Cannot find content.xml");         }           var bytesResult = new byte[] { };         var bytes = new byte[2000];         var i = 0;           while ((i = zipInputStream.Read(bytes, 0, bytes.Length)) != 0)         {             var arrayLength = bytesResult.Length;             Array.Resize<byte>(ref bytesResult, arrayLength + i);             Array.Copy(bytes, 0, bytesResult, arrayLength, i);         }         contentXml = Encoding.UTF8.GetString(bytesResult);     }     return contentXml; } If here is content.xml file then we stop browsing the file. We read this file to memory and return it as UTF-8 format string. Importing rows Our last task is to import rows. We use special method for this as we have to handle some tricks here. To keep files smaller the cell count on row is not always the same. If we have more than one empty cell one after another then ODS keeps only one cell for sequential empty cells. 
This cell has attribute called number-columns-repeated and it’s value is set to the number of sequential empty cells. This is why we use two indexers for cells collection. private void ImportRow(XElement row, ImportResult result) {     var cells = (from c in row.Descendants()                 where c.Name == "{urn:oasis:names:tc:opendocument:xmlns:table:1.0}table-cell"                 select c).ToList();       var dto = new DataDto();       var count = cells.Count;     var j = -1;       for (var i = 0; i < count; i++)     {         j++;         var cell = cells[i];         var attr = cell.Attribute("{urn:oasis:names:tc:opendocument:xmlns:table:1.0}number-columns-repeated");         if (attr != null)         {             var numToSkip = 0;             if (int.TryParse(attr.Value, out numToSkip))             {                 j += numToSkip - 1;             }         }           if (i > 30) break;         if (j == 0)         {             dto.SomeProperty = cells[i].Value;         }         if (j == 1)         {             dto.SomeOtherProperty = cells[i].Value;         }         // some more data reading     }       // save data } You can define your own class for import results and add there all problems found during data import. Your application gets the results and shows them to user. Conclusion Reading ODS files may seem to complex task but actually it is very easy if we need only data from those documents. We can use some zip-library to get the content file and then parse it to XML. It is not hard to go through the XML but there are some optimization tricks we have to know. The code here is safe to use in web applications as it is not using any API-s that may have special needs to server and infrastructure.
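
    The same idea can be sketched in a few lines of Python, purely as an illustration of the zip container and the number-columns-repeated handling described above; the namespace URI comes from the post, while the function name and the file name in the commented usage are made up.

      import zipfile
      import xml.etree.ElementTree as ET

      TABLE_NS = "urn:oasis:names:tc:opendocument:xmlns:table:1.0"

      def read_ods_rows(path):
          # An .ods file is a zip container; the sheet data lives in content.xml.
          with zipfile.ZipFile(path) as ods:
              root = ET.fromstring(ods.read("content.xml"))
          for row in root.iter("{%s}table-row" % TABLE_NS):
              cells = []
              for cell in row.iter("{%s}table-cell" % TABLE_NS):
                  # Sequential empty cells are run-length encoded via this attribute.
                  repeat = int(cell.get("{%s}number-columns-repeated" % TABLE_NS, 1))
                  cells.extend(["".join(cell.itertext())] * repeat)
              yield cells

      # Hypothetical usage, skipping the header row as the importer above does:
      # for cells in list(read_ods_rows("import.ods"))[1:]:
      #     print(cells)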

  • Limit maximum incoming connections to a port using iptables

    - by Harley
    I have a server that has apache listening on a number of ports. Some ports are used for configuring the server, and another is used to download large files. My problem is that when I have a large number of clients downloading files, the web interface is uncontactable. I would like to limit the number of clients connecting on the "large file" port so that apache always has available connections to configure the server. A REJECT is fine, the client trying to download the file will back off and retry later. Each client only has one connection open to the server at a time, so limiting by IP won't work. I know I could put something in front of apache to manage this, but I'd really like to do it in iptables, without adding more software.

  • Connect to bluetooth device from command line

    - by Ilari Kajaste
    Background: I'm using my bluetooth headset as audio output. I managed to get it working by following the long list of instructions in the BluetoothHeadset community documentation, and I have automated the process of activating the headset as the default audio output into a script, thanks to another question. However, since I use the bluetooth headset with both my phone and computer (and the headset doesn't support two input connections), in order for the phone not to "steal" the connection when the headset is turned on, I force the headset into discovery mode when connecting to the computer (the phone gets to connect to it automatically). So even though the headset is paired OK and would autoconnect in a "normal" scenario, I always have to use the little bluetooth icon in the notification area to actually connect to my device (see screenshot). What I want to avoid: this GUI for connecting to a known and paired bluetooth device. What I want instead: I'd like to make bluetooth do exactly what clicking the connect item in the GUI does, only by using the command line. I want to use the command line so I can make a single-keypress shortcut for the action, and wouldn't need to navigate the GUI every time I want to establish a connection to the device. The question: How can I attempt to connect to a specific, known and paired bluetooth device from the command line? Further question: How do I tell if the connection was successful or not?

  • Restrictive routing best practices for Google App Engine with python?

    - by Aleksandr Makov
    Say I have a simple structure: app = webapp2.WSGIApplication([ (r'/', 'pages.login'), (r'/profile', 'pages.profile'), (r'/dashboard', 'pages.dash'), ], debug=True) Basically all pages require authentication except for the login. If a visitor tries to reach a restricted page and he isn't authorized (or lacks privileges), he gets redirected to the login view. The question is about the routing design. Should I check the auth and ACL privs in each of the modules (pages.profile and pages.dash from the example above), or just pass all requests through a single routing mechanism: app = webapp2.WSGIApplication([ (r'/', 'pages.login'), (r'/.+', 'router') ], debug=True) I'm still quite new to GAE, but my app requires authentication as well as ACL. I'm aware that there's a login directive at the server config level, but I don't know how it works or how I can tie it in with my ACL logic, and what's worse, I cannot estimate the time needed to get it running. Besides, it appears to provide only two user groups: admin and user. In any case, this is the configuration I use: handlers: - url: /favicon.ico static_files: static/favicon.ico upload: static/favicon.ico - url: /static/* static_dir: static - url: .* script: main.app secure: always Or am I missing something here and ACL can be set in the config file? Thanks.
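
    One common alternative to a catch-all 'router' entry is to keep the explicit routes and centralise the check in a shared base handler. The sketch below assumes webapp2 on the Python runtime and uses the built-in users API only as a stand-in for whatever auth/ACL check the app actually performs; the class names are invented.

      import webapp2
      from google.appengine.api import users

      class SecuredHandler(webapp2.RequestHandler):
          # Every restricted page inherits from this; dispatch() runs before get()/post().
          def dispatch(self):
              if users.get_current_user() is None:
                  # Not authorized: redirect to the login view instead of serving the page.
                  return self.redirect(users.create_login_url(self.request.uri))
              return super(SecuredHandler, self).dispatch()

      class Profile(SecuredHandler):
          def get(self):
              self.response.write("profile")

      class Dash(SecuredHandler):
          def get(self):
              self.response.write("dashboard")

      app = webapp2.WSGIApplication([
          (r'/profile', Profile),
          (r'/dashboard', Dash),
      ], debug=True)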

  • Best practices for App Idea ownership and shares

    - by JOG
    I am developing apps in my spare time. I am the sole developer, and two non-programmer friends of mine provide vision, content, algorithms and ideas. We always agree happily on all the features, todos and prioritizations. But naturally, coding it is the biggest part. When selling, we agree on splitting the profit equally, that is 33% each. But version 1.0 naturally does not sell much, and I go on to try to make the app more viral. This includes tons of stuff where the others are of little help. Examples: adding support for sharing, Facebook Connect, gamifying, letting users add content, home page, support, maintenance, server services to make it easier to update content. The list is long. Suddenly I will be doing 100% of a lot of work but only "own" a third of the income. My friends may either "fade out" of the project after 1.0, or continue to contribute, but with less value, and I would rather exchange them for more programmers or graphic designers. The effort they put into version 1.0 is worth a lot to the app and I realize I would never have done it without them. But I am doing all the work in the end. It is hard to negotiate about splitting 90, 5, 5 instead of 33% each, because the idea is still theirs. How to solve this? What are the best practices regarding the ownership of the app? What kind of agreements could I make that make it beneficial and motivational for me to continue developing the app?

  • Cannot execute Java program: UnsupportedClassVersionError

    - by Ricko Devian
    I have installed JDK 6, but I can't execute a Java program. For example, I have made tes.java. I compile it with javac tes.java and there's no error when I compile it, but when I want to execute that program it always displays an error. I execute the Java program with java tes. Exception in thread "main" java.lang.UnsupportedClassVersionError: tes : Unsupported major.minor version 51.0 at java.lang.ClassLoader.defineClass1(Native Method) at java.lang.ClassLoader.defineClass(ClassLoader.java:634) at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142) at java.net.URLClassLoader.defineClass(URLClassLoader.java:277) at java.net.URLClassLoader.access$000(URLClassLoader.java:73) at java.net.URLClassLoader$1.run(URLClassLoader.java:212) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:205) at java.lang.ClassLoader.loadClass(ClassLoader.java:321) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:294) at java.lang.ClassLoader.loadClass(ClassLoader.java:266) Could not find the main class: tes. Program will exit. My javac version is 1.7.0, my java version is 1.6.0. Here is my tes.java code: class tes{ public static void main(String[]args){ System.out.println("hello"); } }

  • Why do computers get slower over time? [closed]

    - by Paperflyer
    Possible Duplicate: Why does hardware get slower with time? You probably know this: a newly bought computer is snappy and responsive and just really fast. Then you use it for a couple of months and slowly but steadily the computer gets slower. Opening programs now takes a long time, accessing files takes longer, everything just takes longer than it used to. If you wipe your hard drive and reinstall, everything is back to its original snappiness, but will deteriorate again. This has always happened with every operating system I have used. Worst of all was Windows XP, but it also happened with Ubuntu Linux, Fedora Linux, OSX 10.5/10.6, Windows Vista... (I haven't used Win 7 long enough to confirm this.) Do you know the reason for this? Or even, a cure?

  • Perth's ADF Community Event now an open invite

    - by Chris Muir
    Yesterday saw the next ADF Community Event in Perth, and as promised we grew from 15 to 25 attendees (which is going to cause a bit of a problem soon if we keep growing, as we're going to run out of power points for laptops). This bimonthly event featured presentations from Matthew Carrigy from the Dept of Finance WA on the ADF UI Shell, a small presentation from me about how Fusion Apps uses ADF, and a hands-on session based on programmatically extending ADF BC to call external web services. This was Matt's first presentation to a user group, with two live demos, so all kudos to him for making it look smooth (for the record I hate live demos, I always break something) - thank you Matt! We've already lined up our speakers for the next event in November, and will be inviting yet more customers to this event. However, the event will now move to an open invite, so if you'd like your staff to attend please let me know by emailing chris DOT muir AT oracle DOT com. Alternatively, I've had a fair few requests now for an "Intro to ADF" one-day session, so I'll consider this soon. Certainly if you're interested let me know, as this will help organize the event earlier rather than later.

  • Which is generally considered faster or best practice: symlinks or Apache aliases?

    - by Christopher W. Allen-Poole
    I'm curious as to what most people's views are on this subject. Personally, I will almost always prefer symlinks unless I have no other option -- I find that it is far more obvious when someone is navigating the file system, but, on the other hand aliasing is more platform independent. Windows XP, for example, doesn't have anything remotely comparable to symlinks (NTFS junctions are not interpreted correctly by at least some environments), which means that anything which relies on symlinks in a *nix based system cannot be transferred. (I know that Windows 64x OS's have symlinks, but I've not seen if they can be read correctly by the environments previously mentioned) In addition to this, I was also wondering which is considered faster. Is this even possible to know? Do you have a conjecture? I would imagine that since symlinks are generally more low-level than Apache it would make sense that they would be referenced faster, but, on the other hand, I would guess that Apache is required to do a lookup in either case so it would be disk read dependent.

  • Multiple interfaces to one IP address?

    - by Delan Azabani
    At present, I have: a Netgear router with DHCP off at 192.168.0.1 my computer eth0 at 192.168.0.2 wlan0 at 192.168.0.2 The wlan0 interface always connects to the router, while the eth0 interface connects to other computers with crossover and acts as a dnsmasq DHCP server for network boot and installation. If I use the Gnome NetworkManager to enable both connections, that is, with wlan0 connected to the router/internet and eth0 to another computer, both as 192.168.0.2, I cannot access the internet while eth0 is connected. Why is this? How can I configure my computer to follow wlan0 for Internet usage, but use eth0 for itself (the latter is working but blocking wlan0).

  • What is an SSH key?

    - by acidzombie24
    I signed up for GitHub and noticed the SSH key option, which looked interesting. I originally expected something like an SSL key (name, company name, etc). After going through it I noticed I only put in a password and it is always myuser@comp-name (this is Windows). Why? I thought it was a user/pass id and I could create separate keys for separate purposes for privacy reasons. Now I see I am required to use one to create a repository. Also I see something about a 'private key file' when looking at the options. What exactly is an SSH key and how can I create a separate user without creating a separate login in Windows?
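
    In short, an SSH key is a public/private key pair rather than a username/password: the private half stays in a file on your machine (optionally protected by the passphrase you typed), the public half is the line you paste into GitHub, and the trailing myuser@comp-name is only a comment, not a Windows login. As a rough illustration, a pair can also be generated programmatically; this sketch assumes the third-party paramiko library, and the file name and passphrase are examples.

      import paramiko

      # Generate a 2048-bit RSA key pair (comparable to ssh-keygen's default).
      key = paramiko.RSAKey.generate(2048)

      # Private half: keep this file secret; the passphrase is optional.
      key.write_private_key_file("my_key", password="example passphrase")

      # Public half: this single line is what gets pasted into GitHub.
      # The trailing comment is arbitrary and carries no authentication meaning.
      print("%s %s myuser@comp-name" % (key.get_name(), key.get_base64()))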

  • What do you do if you reach a design dead-end in evolutionary methods like Agile or XP?

    - by Dipan Mehta
    As I was reading Martin Fowler's famous blog post Is Design Dead?, one of the striking impressions I got is that, since in Agile methodology and Extreme Programming the design as well as the programming is evolutionary, there are always points where things need to get refactored. It may be possible that when a programmer's level is good, and they understand design implications and don't make critical mistakes, the code continues to evolve. However, what is the ground reality in a normal context? Once significant development has gone into a product and a critical change occurs in the requirements, isn't it a constraint that, however much we wish otherwise, fundamental design aspects cannot be modified (without throwing away a major part of the code)? Is it not quite likely that one reaches a dead end with no further possible improvement to the design and requirements? I am not advocating any non-Agile practice here, but I want to hear from people who practice agile, iterative, or evolutionary development methods about their real experiences. Have you ever reached such dead ends? How have you managed to avoid or escape them? Or are there measures to ensure that the design remains clean and flexible as it evolves?

  • How do I force specific permissions for new files/folders on Linux file server?

    - by humble_coder
    I'm having an issue with my install of Ubuntu 9.10 (file server) and its samba permissions. Logging in and reading works fine. However, creation of new directories by users restricts access for other users. For instance, if Bob (Windows user who maps the drive) creates a folder in the directory, Jane (Mac user that simply smb mounts) can read from it, but can't write to it -- and vice versa. I then must go CHMOD 777 the directory for everyone to be happy. I've tried editing the "create/directory mask", and "force" options in the smb.conf file but this doesn't seem to help. I'm about to resort to CRONTABing a recursive chmod routine, although I'm sure this isn't the fix. How do I get all new items to always be 777? Does anyone have any suggestions to fix this ever-occurring situation? Best

  • Tracking contributions from contributors not using git

    - by alex.jordan
    I have a central git repo located on a server. I have many contributors that are not tech savvy, do not have server access, and do not know anything about git. But they are able to contribute via the project's web side. Each of them logs on via a web browser and contributes to the project. I have set things up so that when they log on, each user's contributions are made into a cloned repo on the server that is specifically for that user. Periodically, I log on to the server, visit each of their repos, and do a git diff to make sure they haven't done anything bad. If all is well, I commit their changes and push them to the central repo. Of course I need to manually look at their changes so that I can add an appropriate commit message. But I would also like to track who made the changes. I am making the commit, and I (and the web server) are the only users that are actually writing anything to the server. I could track this in the commit messages. While this strikes me as wrong, if this is my only option, is there a way to make userx's cloned repo always include "userx: " before each commit message that I add, so that I do not have to remind myself which user's repo I am in? Or even better, is there an easy way for me to make the commit, but in such a way as I credit the user whose cloned repo I am in?
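
    One way to keep the attribution out of the commit message entirely is git's --author option, which records the contributor as the author while you remain the committer. A rough sketch of that periodic review step is below; the repository paths and email addresses are hypothetical.

      import subprocess

      USERS = {
          # clone path on the server -> author recorded on the commit
          "/srv/repos/userx": "userx <userx@example.org>",
          "/srv/repos/usery": "usery <usery@example.org>",
      }

      def commit_reviewed_changes(repo, author, message):
          # -a stages modified tracked files; --author credits the web contributor.
          subprocess.check_call(
              ["git", "-C", repo, "commit", "-a", "--author", author, "-m", message])

      # After reviewing the diff for a user's repo:
      # commit_reviewed_changes("/srv/repos/userx", USERS["/srv/repos/userx"],
      #                         "Content edits made via the web interface")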

  • Fabric and cygwin don't work with windows UNC paths

    - by tcoopman
    I have some strange problems with fabric deployment to Windows Server 2008r2. The thing I am trying to accomplish is to copy some files to a shared folder with a fabric script (this script does a lot of other things too, but only this step gives me problems). This is the problem: when I try to access a UNC (Universal Naming Convention) path I always get access-denied kinds of answers if I run the script in fabric. When I run the command in an ssh prompt (same user) it works fine. Examples: cmd: robocopy f:/.... //share result: in ssh this works fine, in fabric I get "Logon failure: the user has not been granted the requested logon type at this computer." cmd: cd //share result: in ssh this works fine, in fabric I get "//share: Not a directory" Further information: uname -a and whoami return exactly the same thing in fabric and ssh. I also tried things like mount and net use, but these commands all have kind of the same problem.

  • Outlook 2010 - Export of an Exchange OST to PST creates files with different sizes each time

    - by Jiri Pik
    This is a most weird issue. I have a couple of exchange OST mailboxes, and just for security, I am exporting them using File / Import / Export to a file / Export to PST file. If I run the export consecutively, it always creates files with different file sizes, WITH NO ERROR OR WARNING that something went wrong. The files should be of the same size as you run it right after the previous backup finished. I found out that if the filesize is substantially lower, then a reboot and back up can fix this up. What's your insight into this problem? What could cause that the files have different sizes and what could have caused that there is no warning? I suspected some Windows Search issue as sometimes the backup fails with a dialog error stating that Windows Search terminated the export.

  • How will my Electronic Engineering degree be received in the Canadian Game Development market? [closed]

    - by Harikawashi
    I have a Electronic Engineering with Computer Science Degree from a reputable South African university. The EE with CS degree is basically Electronic Engineering, with some of the high voltage subjects thrown out and replaced with computer science subjects - mostly quite theoretical, but not in too much depth. I went on to earn a Masters Degree in Digital Signal Processing, focussing on Speech Recognition in Educational Applications. I have always loved programming - I taught myself QBASIC when I was in primary school, I learned Java at school, did some low level C at University, and taught myself C# and Python while doing my post graduate degree. C# is currently my strong suit, I think I am pretty capable with it. I have two years work experience in Namibia - working as a consulting electrical engineer (no software content whatsoever) and also developing C# desktop applications for the company I work for. I would like to move to Canada next year and work in the Game Development Industry as programmer or software engineer. My interests in particular are towards the more mathematical applications, like game and physics engines, or statistical disciplines like artificial intelligence. However, these are passions - not areas in which I have any work experience. So the question: How well will my BEngEE&CS and MScEng be received in the game industry? Seeing as it's not a pure software degree and I have no official software development work experience?

  • How to make Microsoft Keyboard special keys run osascript commands on OSX?

    - by t-a-w
    I'm trying to make the (1) special key open a new terminal window. I bound it to the file /Users/taw/bin/new_term, which contains: #!/bin/sh exec osascript -e 'tell application "Terminal" to do script "cd ."' This does the trick, except it also opens a Terminal window with this (even though Terminal.app is configured to always close windows when processes finish): Last login: Thu Mar 11 19:41:29 on ttys000 /Users/taw/bin/new_term ; exit; ~$ /Users/taw/bin/new_term ; exit; tab 1 logout [Process completed] How do I make it all work correctly? (Possibly using a way different from what I've been attempting so far.)

  • Improving server security [closed]

    - by Vicenç Gascó
    I've been developing webapps for a while ... and I always had a sysadmin who made the environment perfect to run my apps with no worries. But now I am starting a project by myself, and I need to set up a server, knowing next to nothing about it. All I need is Linux with a webserver (I usually used Apache), PHP and MySQL. I'll also need SSH, SSL to run https:// and FTP to transfer files. I know how to install almost everything (need advice about SSL) with Ubuntu Server, but I am concerned about the security topic ... say: firewall, open/closed ports, PHP security, etc ... Where can I find a good guide covering these topics? Everything else on the server I don't need, and I want to know how to remove it, to avoid resource consumption. Final note: I'll be running the webapp at amazon-ec2 or rackspace cloud servers. Thanks in advance!!

  • How-to get the binding for a tab in the Dynamic Tab Shell Template

    - by Frank Nimphius
    The Dynamic Tab Shell template does expose a method on the Tab.java class that allows you to get access to the ADF binding container for a tab. At least in theory it does, because in practice this call always returns a null value (a bug is filed for this). To work around the problem, you can use code similar to the following to get the ADF binding for a specific tab: DCBindingContainer currentBinding = (DCBindingContainer) BindingContext.getCurrent().getCurrentBindingsEntry(); DCBindingContainer templateBinding = (DCBindingContainer) currentBinding.get("ptb1"); DCBindingContainer tabBinding = (DCBindingContainer) templateBinding.get("r" + 0); In the code above, the tabBinding variable will hold the binding reference to the first tab in the dynamic tab shell template. Note that the tab doesn't need to be visible for this (which has to do with how the template works). "ptb1" is the template reference name in the PageDef file (Executable section) of the template consumer view. Check this string in your page before using this code; if it differs, change it in the code above as well. "r0" is the binding reference of the first tab in the template. The last tab is referenced by "r14".

  • Access Control Service: Home Realm Discovery (HRD) Gotcha

    - by Your DisplayName here!
    I really like ACS2. One feature that is very useful is home realm discovery. ACS provides a Nascar-style list as well as discovery based on email addresses. You can take control of the home realm selection process yourself by downloading the JSON feed or by manually setting the home realm parameter. Plenty of options – the only option missing is turning it off… In other words, when you set up your ACS namespace and realm and register identity providers, there is no way to keep the list of identity providers secret. An interested "user" can always retrieve all registered identity providers (using the browser or by downloading the JSON feed). This may not be an issue with web identity providers, but when you use ACS to federate with customers or business partners, you may not want to disclose that list to the public (or to other customers). This is an adoption blocker for certain situations. I hope this feature will be added soon. In addition I would also like to see a feature I call "home realm aliases": some random string that I can use as a whr parameter instead of the real issuer URI.

  • Bright Minds in Singapore: Oracle Graduate Hiring

    - by user769227
     Last week I was in Singapore and had the opportunity to take part in our graduate interviewing that we are currently undertaking as part of our ASEAN hiring. I always feel fortunate to get the chance to meet and talk with students in the APAC region and taking time to meet some of the students we interviewed in Singapore last week is no exception. The excitement and enthusiasm of many of the students that I spoke to last week really stands out but what really brought some of them to the forefront for me was their creative ways of thinking and the level of professionalism that I saw in the students. Some of the presentation and communication skills that I saw displayed would rival experienced IT Consultants in the industry.  We still have more interviews to follow up from last week, but I am confident that of the students we had the chance to meet last week some of them will go on to have bright and successful careers here at Oracle.  To all the students that came in and spent the day with us, I want to thank you for giving us your time and for sharing your thoughts and ideas with us. From a business perspective I think you all will go on and do great things and from a personal stand point I enjoyed many of the conversations I had and feel lucky to meet with you. Best of luck with the remainder of your interviews and I hope to see some of you in the halls on my next visit to Singapore.
