Search Results

Search found 5762 results on 231 pages for 'clients'.

Page 218/231

  • Can I write a test that succeeds if and only if a statement does not compile?

    - by Billy ONeal
    I'd like to prevent clients of my class from doing something stupid. To that end, I have used the type system and made my class accept only specific types as input. Consider the following example (not real code; I've left off things like virtual destructors for the sake of the example):

        class MyDataChunk {
            //Look Ma! Implementation!
        };
        class Sink;
        class Source {
            virtual void Run() = 0;
            Sink *next_;
            void SetNext(Sink *next) { next_ = next; }
        };
        class Sink {
            virtual void GiveMeAChunk(const MyDataChunk& data) {
                //Impl
            }
        };
        class In {
            virtual void Run() {
                //Impl
            }
        };
        class Out {
        };
        //Note how Filter and Sorter have the same declaration. Concrete classes
        //will inherit from them. The separate names are there only to ensure
        //that some idiot doesn't go in and put in a filter where someone expects
        //a sorter, etc.
        class Filter : public Source, public Sink {
            //Drop objects from the chain-of-command pattern that don't match a particular
            //criterion.
        };
        class Sorter : public Source, public Sink {
            //Sorts inputs to outputs. There are different sorters because someone might
            //want to sort by filename, size, date, etc...
        };
        class MyClass {
            In i;
            Out o;
            Filter f;
            Sorter s;
        public:
            //Functions to set i, o, f, and s
            void Execute() {
                i.SetNext(f);
                f.SetNext(s);
                s.SetNext(o);
                i.Run();
            }
        };

    What I don't want is for somebody to come back later and go, "Hey, look! Sorter and Filter have the same signature. I can make a common one that does both!", thus breaking the semantic difference MyClass requires. Is this a common kind of requirement, and if so, how might I implement a test for it?
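
    A common way to test this is a negative compilation test: feed the compiler a snippet that exercises the forbidden usage and assert that compilation fails. A minimal sketch of such a harness, in Python, assuming g++ is on the PATH (any compiler with a syntax-only mode works); the snippet is self-contained so the failure is the forbidden conversion, not a missing header:

        # Compile-fail test: the test passes only if the snippet does NOT compile.
        import os
        import subprocess
        import tempfile

        SNIPPET = """
        struct Source {};
        struct Sink {};
        struct Filter : Source, Sink {};
        struct Sorter : Source, Sink {};
        int main() {
            Filter f;
            Sorter& s = f;   // must not compile: a Filter is not a Sorter
            (void)s;
        }
        """

        def assert_does_not_compile(snippet):
            with tempfile.NamedTemporaryFile(suffix=".cpp", mode="w", delete=False) as f:
                f.write(snippet)
                path = f.name
            try:
                # -fsyntax-only: type-check without producing an object file
                result = subprocess.run(["g++", "-fsyntax-only", path],
                                        capture_output=True, text=True)
                assert result.returncode != 0, "snippet compiled but should not have"
            finally:
                os.remove(path)

        assert_does_not_compile(SNIPPET)
        print("OK: the forbidden conversion was rejected")

    Build systems offer the same idea natively; CMake's try_compile and Boost's compile-fail test rule wrap the same pattern.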


  • Define Javascript slider hit/rollover area

    - by Rob
    Hey, I'm having an issue defining the hit area for a JavaScript sliding element. See the example: http://www.warface.co.uk/clients/warface.co.uk/ Slide over the grey box on the right side to reveal the button. Although this works, I would like the slider to be triggered only by rolling over the red block.

    CSS:

        .slidingtwitter { /* -- This is the hit area -- */
            background: #ccc;
            width: 255px;
            height: 55px;
            overflow: hidden;
            top: 50%;
            right: 0px; /* -- This is the sliding start point -- */
            position: fixed;
            font-family: Gotham, Sans-Serif;
            z-index: 50;
        }
        .slidingtwitter.right { right: 0px; }
        .slidingtwitter .caption { /* -- This is the sliding area -- */
            background: #fff;
            position: absolute;
            width: 260px;
            height: 55px;
            right: -205px; /* -- This is the sliding start point -- */
        }
        .slidingtwitter a { color: #484848; font-size: 20px; text-transform: uppercase; }
        .slidingtwitter a:hover { color: black; }
        .slidingtwitter .smaller { font-size: 12px; font-family: Gotham Medium; }
        .twitterblock {
            background: #f35555 url("styles/images/button_twitter.png") no-repeat 14px 15px;
            width: 35px;
            height: 35px;
            padding: 10px;
            float: left;
            display: block;
        }
        .slidingtwitter .followme {
            background: url("styles/images/button_arrowheadthin.jpg") no-repeat right 0;
            height: 35px;
            display: block;
            float: left;
            line-height: 14px;
            width: 140px;
            margin: 10px 0px 0px 14px;
            padding-top: 6px;
            padding-right: 40px;
        }

    JS:

        $('.slidingtwitter').hover(function() {
            $(".slide", this).stop().animate({right: '0px'}, {queue: false, duration: 400});    // Position on rollover
        }, function() {
            $(".slide", this).stop().animate({right: '-205px'}, {queue: false, duration: 400}); // Position on rollout
        });

    Any suggestions would be much appreciated.


  • Java consistent synchronization

    - by ring0
    We are facing the following problem in a Spring service, in a multi-threaded environment: three lists are freely and independently accessed for reads; once in a while (every 5 minutes), they are all updated to new values. There are some dependencies between the lists: for instance, the third one should not be read while the second one is being updated and the first one already has new values, because that would break the consistency of the three lists. My initial idea is to make a container object having the three lists as properties. The synchronization would then be first on that object, then one by one on each of the three lists. Some code is worth a thousand words... so here is a draft:

        private class Sync {
            final List<Something> a = Collections.synchronizedList(new ArrayList<Something>());
            final List<Something> b = Collections.synchronizedList(new ArrayList<Something>());
            final List<Something> c = Collections.synchronizedList(new ArrayList<Something>());
        }
        private Sync _sync = new Sync();
        ...
        void updateRunOnceEveryFiveMinutes() {
            final List<Something> newa = new ArrayList<Something>();
            final List<Something> newb = new ArrayList<Something>();
            final List<Something> newc = new ArrayList<Something>();
            ...building newa, newb and newc...
            synchronized(_sync) {
                synchronized(_sync.a) {
                    _sync.a.clear();
                    _sync.a.addAll(newa);
                }
                synchronized(_sync.b) {
                    ...same with newb...
                }
                synchronized(_sync.c) {
                    ...same with newc...
                }
            }
        }
        // Next is accessed by clients
        public List<Something> getListA() { return _sync.a; }
        public List<Something> getListB() { ...same with b... }
        public List<Something> getListC() { ...same with c... }

    The question would be: is this draft safe (no deadlock, data consistency)? Would you have a better implementation suggestion for this specific problem? Update: changed the order of the _sync synchronization and the newa... building. Thanks
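
    One approach worth weighing against the draft above (a sketch, not the only safe design): build the three new lists off to the side and publish them together as one immutable snapshot, so a reader can never observe a half-updated trio. Illustrated here in Python; the Java equivalent is an AtomicReference (or volatile field) pointing to an immutable holder of three unmodifiable lists.

        # Readers take the snapshot once and get a mutually consistent trio;
        # the updater swaps the whole snapshot in one atomic publication.
        import threading
        from typing import NamedTuple, Tuple

        class Snapshot(NamedTuple):
            a: Tuple
            b: Tuple
            c: Tuple

        class ListHolder:
            def __init__(self):
                self._lock = threading.Lock()
                self._snap = Snapshot((), (), ())

            def update(self, new_a, new_b, new_c):
                snap = Snapshot(tuple(new_a), tuple(new_b), tuple(new_c))
                with self._lock:              # single publication point
                    self._snap = snap

            def read(self):
                with self._lock:              # never a half-updated trio
                    return self._snap

    With this shape there is one lock, taken briefly, so there is no lock ordering to get wrong and no deadlock to reason about.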


  • Problem with character encoding on email sent via PHP?

    - by cosgrove
    Hi everybody, I'm having some trouble sending properly formatted HTML e-mail from a PHP script. I am running PHP 5.3.0 and Apache 2.2.11 on Windows XP Professional. The output looks like this:

        Agent Summary for Support on Tuesday April 20 2010=20
        Ext. Name        Time     Volume
        137  Agent Name  01:27:25 1
        138  =09         00:00:00 0
        139  =09         00:00:00 0

    You see the =20 and =09 in there? If you look at the HTML, you also see = signs being turned into =3D. I figure this is a character encoding issue, as I read the following at Wikipedia: "ISO-8859-1 and Windows-1252 confusion: It is very common to mislabel text data with the charset label ISO-8859-1, even though the data is really Windows-1252 encoded. In Windows-1252, codes between 0x80 and 0x9F are used for letters and punctuation, whereas they are control codes in ISO-8859-1. Many web browsers and e-mail clients will interpret ISO-8859-1 control codes as Windows-1252 characters in order to accommodate such mislabeling, but it is not standard behaviour and care should be taken to avoid generating these characters in ISO-8859-1 labeled content." This looks like the problem, but I don't know how to fix it. My code looks like this:

        ob_start();
        report_queue_summary($yesterday,$yesterday,$first_extension,$last_extension,$queue);
        $body_report = ob_get_contents();
        ob_end_clean();
        $body_footer = "This is an automatically generated e-mail.";
        $message = new Mail_mime();
        $html = $body_header.$body_report.$body_footer;
        $message->setHTMLBody($html);
        $body = $message->get();
        $extraheaders = array("From"=>"***redacted***", "To"=>$recipient,
            "Subject"=>"Agent Summary for $yesterday [$queue]",
            "Content-type"=>"text/html; charset=iso-8859-1");
        $headers = $message->headers($extraheaders);
        # setup e-mail
        $host = "*********";
        $port = "26";
        $username = "*****";
        $password = "*****";
        # Send e-mail
        $smtp = Mail::factory('smtp', array('host' => $host,
                                            'port' => $port,
                                            'auth' => true,
                                            'username' => $username,
                                            'password' => $password));
        $mail = $smtp->send($recipient, $extraheaders, $body);
        if (PEAR::isError($mail)) {
            echo("" . $mail->getMessage() . "");
        } else {
            echo("Message successfully sent!");
        }

    Is the problem that I'm using output buffering?
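
    A note for readers: =20, =09 and =3D are not Windows-1252 artifacts; they are quoted-printable escapes for space, tab and '=' respectively, which suggests the MIME part is quoted-printable encoded while the headers the client ends up seeing don't declare it (the handcrafted Content-type header above can clash with the headers Mail_mime generates itself). A quick way to confirm the diagnosis, sketched with Python's standard library:

        # Decode the raw body as quoted-printable; if the garbage disappears,
        # the body was QP-encoded and the message headers just failed to say so.
        import quopri

        raw = b"Agent Summary for Support on Tuesday April 20 2010=20\n137=09Agent Name=0901:27:25=091"
        print(quopri.decodestring(raw).decode("latin-1"))
        # "=20" becomes a space, "=09" a tab, and "=3D" would become "="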


  • What alternatives do I have for source control, and does Git do that?

    - by RubberDuck
    I work as a freelance programmer for some clients and also create apps for myself. When I work for myself, obviously I work alone. I generally don't work in a linear way. My big problems today are: I have a lot of apps that use the same classes I have developed. In the past, I put all these common classes in a directory outside all projects and included them in my apps using absolute paths, but this method sucks, because by accident (if you forget) you may change a path or the disk and all projects are broken. Then I decided to copy those classes into my projects every time. Because the majority of these classes do not change frequently, I am relatively OK, but when they change, I am in hell: when I change one of these classes, I have to propagate the changes to all other apps using copies of them. I have also tried to create frameworks, but thanks to Apple, I cannot create frameworks for iOS and have to create libraries and bundles and create a nightmare of paths from one to the other and to the project to make that sh!t work. So, I am done with frameworks/libraries on Xcode until Xcode is a decent IDE. So, I see I need something better to manage my source code. What I need is this (I have never used Git on Xcode; I have read Apple's docs but I still have these points): Does Git locally on Xcode allow me to deal with assets or just code? Can I have the equivalent of a "framework" (code + assets) managed by Git locally? Can an entire xcodeproj be managed as a unit? I mean, suppose I have an xcodeproj created and want Git to manage it: how do I enable Git on a project that was created without it and start designating files for management? (I have enabled Git in Xcode's preferences, but the whole Source Control menu is grayed out.) Is Git the best option? Do I have another? Remember that my main condition is that the files should stay on the local computer. Please save me (I am a bit dramatic today). Thanks.


  • organizing external libraries and include files

    - by stijn
    Over the years my projects use more and more external libraries, and the way I do it starts feeling more and more awkward (although, it has to be said, it does work flawlessly). I use VS on Windows, CMake on others, and Code Composer for targeting DSPs on Windows. Except for the DSPs, both 32-bit and 64-bit platforms are used. Here's a sample of what I am doing now; note that, as shown, the different external libraries themselves are not always organized in the same way. Some have different lib/include/src folders, others have a single src folder. Some came ready to use with static and/or shared libraries, others were built.

        /path/to/projects
          /projectA
          /projectB
        /path/to/apis
          /apiA
            /src
            /include
            /lib
          /apiB
            /include
            /i386/lib
            /amd64/lib
        /path/to/otherapis
          /apiC
            /src
        /path/to/sharedlibs
          /apiA_x86.lib        -->some libs were built in all possible configurations
          /apiA_x86d.lib
          /apiA_x64.lib
          /apiA_x64d.lib
          /apiA_static_x86.lib
          /apiB.lib            -->other libs have just one import library
        /path/to/dlls          -->most of this directory also gets distributed to clients
          /apiA_x86.dll            and it's in the PATH
          /apiB.dll

    Each time I add an external library, I roughly use this process:

        1. build it, if needed, for different configurations (release/debug/platform)
        2. copy its static and/or import libraries to 'sharedlibs'
        3. copy its shared libraries to 'dlls'
        4. add an environment variable, eg 'API_A_DIR', that points to the root of apiA, like '/path/to/apis/apiA'
        5. create a VS property sheet and a CMake file to state the include path and eventually the library name, like include = '$(API_A_DIR)/Include' and lib = apiA.lib
        6. add the property sheet/CMake file to the project needing the library

    It's especially steps 4 and 5 that are bothering me. I am pretty sure I am not the only one facing this problem, and would like to see how others deal with it. I was thinking of getting rid of the environment variables per library, and using just one 'API_INCLUDE_DIR', populating it with the include files in an organized way:

        /path/to/api/include
          /apiA
          /apiB
          /apiC

    This way I do not need the include path in the property sheets nor the environment variables. For libs that are only used on Windows I don't even need a property sheet at all, as I can use #pragmas to instruct the linker which library to link to. Also in the code it will be clearer what gets included, and there is no need for wrappers to include files that have the same name but come from different libraries:

        #include <apiA/header.h>
        #include <apiB/header.h>
        #include <apiC_version1/header.h>

    The drawback is of course that I have to copy include files, and possibly** introduce duplicates on the filesystem, but that looks like a minor price to pay, doesn't it? (** actually, once libraries are built, the only thing I need from them is the include files and the libs. Since each of those would have a dedicated directory, the original source tree is not needed anymore and can be deleted.)


  • IE7 and 8 Hangs Randomly on CSS Images

    - by BJ Safdie
    We have an ASP.NET 3.5 application that has been in production for over a year. Our last release was a couple of months ago. We use CSS for styling and application of background images to divs and such. The server is Windows 2003 with IIS. Suddenly, this week, we have had reports from some users that the page seems to hang up while loading. The status bar was showing the name of a background image used in the page's main area (assigned in CSS). At our office, some of us could recreate the problem, while others could not. IE6 and Firefox do not seem to be affected, only IE7/8. Running Fiddler on an affected machine and trying to see what was happening with the requests seemed to make the problem go away (the problem disappeared while running through Fiddler and returned when we weren't). Hitting Refresh on a hung load often made the page load just fine. I checked the background image, and even replaced it with an archived copy. No joy. We re-deployed the app from our production source. No joy. We restarted IIS and eventually rebooted the whole server. There are no unusual entries in the event logs, the app logs or the IIS logs. Finally, I removed the image entirely and re-styled the page not to use a background image. That solved the problem, at least for now. However, we have reports of other images "hanging." The images are PNGs, but I have heard some rumors that sometimes a GIF hangs, though I have no screenshot to confirm. This just started happening "out of the blue." There have been no releases or updates applied to the server recently. We even checked updates on clients to see if a recent Windows Update might have caused this on the client, but there was nothing updated within the last couple of weeks. If you have any information about this problem, I would love to hear from you. I would also greatly appreciate any recommendations on additional diagnostics we can try.


  • DataGridView live display of datatable using virtual mode

    - by Chris
    I have a DataGridView that will display records (log entries) from a database. The number of records that can exist at a time is very large. I would like to use the virtual mode feature of the DataGridView to display a page of data, and to minimize the amount of data that has to be transferred across a network at a given time. Polling for data is out of the question. There will be several clients running at a time, all of which are on the same network and viewing the records. If they all poll for data, the network will run very slowly. The data is read-only to the user; they won't be able to edit any of it, just view it. I need to know when updates occur in the database, and I need to update the screen with those updates accordingly using virtual mode. If a page of data a user is viewing contains data that has changed, he/she will see those updates on that page. If updates were made to data in the database, but not in the data the user is viewing, then not much really changes on the user's screen (maybe just the scroll bar, if records were added or removed). My current approach is using SQL Server change tracking with the Sync Framework. Each client has a local SQL Server CE instance and database file that is kept in sync with the main database server. I use the information from the synchronization event to see if any changes were made to the main database and were sync'ed to the client. I need to use the DataGridView virtual mode here because I can't have thousands of records loaded into the DataGridView at once, otherwise memory usage goes through the roof. The main challenge right now is knowing how to use virtual mode to provide a seamless experience to the user by allowing them to scroll up and down through the records, and also have records update on the fly without interfering with the user inappropriately. Has anybody dealt with this issue before, and if so, where can I see how they did it? I've gone through some of the MSDN documentation and examples on virtual mode. So far, I haven't found documentation and/or examples on their site that explain how to do what I am trying to accomplish.
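
    The heart of a virtual-mode implementation is the on-demand row lookup (CellValueNeeded in the DataGridView) backed by a bounded page cache that the change-notification path can invalidate. A language-neutral sketch of that cache in Python; fetch_page stands in for the real data access layer:

        # Serve one row at a time from a small LRU page cache; hit the
        # database only on a page miss, so memory stays bounded no matter
        # how many records exist.
        from collections import OrderedDict

        PAGE_SIZE = 100
        MAX_PAGES = 10                          # at most 10 pages resident

        class PageCache:
            def __init__(self, fetch_page):
                self._fetch_page = fetch_page   # fetch_page(page_no) -> list of rows
                self._pages = OrderedDict()     # page_no -> rows, in LRU order

            def row(self, index):
                page_no, offset = divmod(index, PAGE_SIZE)
                if page_no not in self._pages:
                    self._pages[page_no] = self._fetch_page(page_no)
                    if len(self._pages) > MAX_PAGES:
                        self._pages.popitem(last=False)   # evict least recently used
                self._pages.move_to_end(page_no)
                return self._pages[page_no][offset]

            def invalidate(self, page_no):
                # Called from the change-tracking/sync notification so edited
                # pages are re-fetched the next time the grid scrolls over them.
                self._pages.pop(page_no, None)

    The invalidate hook is where the sync-event notification plugs in, so the visible page stays live without any polling.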


  • REST and links: middle ground?

    - by pbean
    I've been wondering about how far to go with links in REST. Consider books, which have authors; there is obviously a many-to-many relationship between books and authors (a book can be written by multiple authors, and authors can write multiple books). So let's say we have a REST call http://server/book/21, which will return a book XML document containing information about an author. Now, since the book is a resource and the author is a resource, the XML should not straight up include all the author information; it should contain a link to the author information. But which of the below two examples is more widely accepted? (Excuse my crappily formatted XML, I am not that experienced with hand-writing XML.)

        <book>
          <title>Some Book</title>
          <authors>
            <author link="http://server/author/82">Some Guy</author>
            <author link="http://server/author/51">Some Other Guy</author>
          </authors>
        </book>

    Then, an author link would return more information:

        <author>
          <name>Some Guy</name>
          <dateOfBirth>some time</dateOfBirth>
        </author>

    Or:

        <book>
          <title>Some Book</title>
          <authors>http://server/book/21/authors</authors>
        </book>

    Where http://server/book/21/authors returns:

        <authors>
          <author link="http://server/author/82">Some Guy</author>
          <author link="http://server/author/51">Some Other Guy</author>
        </authors>

    And then each of those returns the former <author> example again. The reason I'm asking is basically because at my job they went with the second approach, and it seems to me that clients have to take many more steps to reach where they want to go. Also, for basic information which "you're always going to need" (the author's name), you have to take one additional step. On the other hand, that way the book resource only returns information about the book (nothing else), and to get anything else, you have to access other resources.
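
    To make the trade-off concrete, here is what a client has to do in each case, sketched in Python against the question's hypothetical URLs: with embedded links, one GET suffices to list author names; with a linked authors collection, the client pays one extra round trip before any names are available.

        # Fetch a book and list its authors under either design.
        import urllib.request
        import xml.etree.ElementTree as ET

        def get_xml(url):
            with urllib.request.urlopen(url) as resp:
                return ET.fromstring(resp.read())

        book = get_xml("http://server/book/21")
        authors = book.find("authors")

        if authors.text and authors.text.strip().startswith("http"):
            # Second design: the element holds a link to the authors
            # collection, so follow it (one extra round trip).
            authors = get_xml(authors.text.strip())

        for author in authors.findall("author"):
            print(author.text, "->", author.get("link"))

    Both designs are legitimate REST; the embedded-link form simply optimizes the common read path at the cost of a slightly fatter book representation.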


  • Non-blocking TCP acceptor not reading from socket

    - by Abruzzo Forte e Gentile
    I have the code below implementing a non-blocking TCP acceptor. Clients are able to connect without any problem and the writing seems to occur as well, but the acceptor doesn't read anything from the socket, and the call to read() blocks indefinitely. Am I using some wrong setting for the acceptor? Kind regards, AFG

        int main(){
            create_programming_socket();
            poll_programming_connect();
            while(1){
                poll_programming_read();
            }
        }

        int create_programming_socket(){
            int cnt = 0;
            p_listen_socket = socket( AF_INET, SOCK_STREAM, 0 );
            if( p_listen_socket < 0 ){ return 1; }
            int flags = fcntl( p_listen_socket, F_GETFL, 0 );
            if( fcntl( p_listen_socket, F_SETFL, flags | O_NONBLOCK ) == -1 ){ return 1; }
            bzero( (char*)&p_serv_addr, sizeof(p_serv_addr) );
            p_serv_addr.sin_family = AF_INET;
            p_serv_addr.sin_addr.s_addr = INADDR_ANY;
            p_serv_addr.sin_port = htons( p_port );
            if( bind( p_listen_socket, (struct sockaddr*)&p_serv_addr, sizeof(p_serv_addr) ) < 0 ) { return 1; }
            listen( p_listen_socket, 5 );
            return 0;
        }

        int poll_programming_connect(){
            int retval = 0;
            static socklen_t p_clilen = sizeof(p_cli_addr);
            int res = accept( p_listen_socket, (struct sockaddr*)&p_cli_addr, &p_clilen );
            if( res > 0 ){
                p_conn_socket = res;
                int flags = fcntl( p_conn_socket, F_GETFL, 0 );
                if( fcntl( p_conn_socket, F_SETFL, flags | O_NONBLOCK ) == -1 ){
                    retval = 1;
                }else{
                    p_connected = true;
                }
            }else if( res == -1 && ( errno == EWOULDBLOCK || errno == EAGAIN ) ) {
                //printf( "poll_sock(): accept(c_listen_socket) would block\n");
            }else{
                retval = 1;
            }
            return retval;
        }

        int poll_programming_read(){
            int retval = 0;
            bzero( p_buffer, 256 );
            int numbytes = read( p_conn_socket, p_buffer, 255 );
            if( numbytes > 0 ) {
                fprintf( stderr, "poll_sock(): read() read %d bytes\n", numbytes );
                pkt_struct2_t tx_buf;
                int fred;
                int i;
            }
            else if( numbytes == -1 && ( errno == EWOULDBLOCK || errno == EAGAIN ) ) {
                //printf( "poll_sock(): read() would block\n");
            }
            else {
                close( p_conn_socket );
                p_connected = false;
                retval = 1;
            }
            return retval;
        }
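
    One likely culprit in the code above (a guess, since the globals aren't shown): accept() is attempted exactly once, before the loop, so a client that connects afterwards is never accepted, and read() is then called on whatever stale descriptor p_conn_socket holds. A readiness-driven version, sketched in Python (whose socket API mirrors these C calls), retries accept inside the loop and lets select() say when anything is readable:

        # Non-blocking acceptor driven by select(): accept is retried from
        # inside the loop, and reads happen only on sockets reported readable,
        # so nothing ever blocks or spins.
        import select
        import socket

        listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        listener.bind(("", 50000))
        listener.listen(5)
        listener.setblocking(False)

        sockets = [listener]
        while True:
            readable, _, _ = select.select(sockets, [], [])
            for sock in readable:
                if sock is listener:
                    conn, addr = listener.accept()   # will not block: select said so
                    conn.setblocking(False)
                    sockets.append(conn)
                else:
                    data = sock.recv(255)
                    if data:
                        print("read %d bytes" % len(data))
                    else:                            # peer closed the connection
                        sockets.remove(sock)
                        sock.close()

    The same structure in C is a select()/poll() loop over the listen socket plus all accepted connections.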


  • What libraries provide cross-platform 3D and P2P support?

    - by uckelman
    I'm trying to find a constellation of libraries which, taken together, meet the following requirements:

        - Smooth scaling, rotation, panning (in two dimensions). I'll have a large bitmap (or SVG, in some cases), maybe up to 10000x10000 pixels, which serves as a map, with some middling number of small bitmaps (or, again, possibly SVG) that can be dragged around over it. I need to be able to zoom, rotate, and pan this scene; however, the view will always be normal to (i.e., looking head-on at) the large bitmap, so I'm not really using the depth dimension.
        - Peer-to-peer. I'd like for multiple users to be able to connect in order to share one of the scenes mentioned above, preferably peer-to-peer, without much configuration by the user. I'm intending to have a server running for cases where users are unable to connect P2P; I'd like to have the failover happen automatically, or possibly have some way of promoting clients that are capable of being servers themselves.
        - Synchronization. Once a user has started dragging one of the small bitmaps (a piece), no other user should be able to drag that piece until the drag stops. I haven't thought of exactly how to do this; there might be a simple solution, or this kind of synchronization might be something that a library provides.
        - Cross(ish)-platform. I need to be able to run on Linux, Windows, and Mac OS. It would be nice to also be able to run on tablets. Having mostly the same code for all platforms is a plus, but not absolutely necessary.
        - (L)GPL compatible. I'm planning to release under the LGPL or GPL, preferably the latter, so I need libraries which have compatible licenses.

    I'm not set on any particular language; I'd like to use the library or libraries which make the work easiest, though my preference is to work in at most two languages for the project. (The Model could potentially be in one language and the View in another, so they could talk to each other via some protocol I define, if that would get me a better selection of libraries to use.) Can anyone offer suggestions for what to use?


  • What do I need to distribute (keys, certs) for Python w/ SSL-socket connection?

    - by fandingo
    I'm trying to write a generic server-client application that will be able to exchange data amongst servers. I've read over quite a few OpenSSL documents, and I have successfully set up my own CA and created a cert (and private key) for testing purposes. I'm stuck with Python 2.3, so I can't use the standard "ssl" library. Instead, I'm stuck with PyOpenSSL, which doesn't seem bad, but there aren't many documents out there about it. My question isn't really about getting it working; I'm more confused about the certificates and where they need to go. Here are my two programs that do work:

    Server:

        #!/bin/env python
        from OpenSSL import SSL
        import socket
        import pickle

        def verify_cb(conn, cert, errnum, depth, ok):
            print('Got cert: %s' % cert.get_subject())
            return ok

        ctx = SSL.Context(SSL.TLSv1_METHOD)
        ctx.set_verify(SSL.VERIFY_PEER|SSL.VERIFY_FAIL_IF_NO_PEER_CERT, verify_cb)
        # ??????
        ctx.use_privatekey_file('./Dmgr-key.pem')
        ctx.use_certificate_file('Dmgr-cert.pem')
        # ??????
        ctx.load_verify_locations('./CAcert.pem')

        server = SSL.Connection(ctx, socket.socket(socket.AF_INET, socket.SOCK_STREAM))
        server.bind(('', 50000))
        server.listen(3)
        a, b = server.accept()
        c = a.recv(1024)
        print(c)

    Client:

        from OpenSSL import SSL
        import socket
        import pickle

        def verify_cb(conn, cert, errnum, depth, ok):
            print('Got cert: %s' % cert.get_subject())
            return ok

        ctx = SSL.Context(SSL.TLSv1_METHOD)
        ctx.set_verify(SSL.VERIFY_PEER, verify_cb)
        # ??????????
        ctx.use_privatekey_file('/home/justin/code/work/CA/private/Dmgr-key.pem')
        ctx.use_certificate_file('/home/justin/code/work/CA/Dmgr-cert.pem')
        # ?????????
        ctx.load_verify_locations('/home/justin/code/work/CA/CAcert.pem')

        sock = SSL.Connection(ctx, socket.socket(socket.AF_INET, socket.SOCK_STREAM))
        sock.connect(('10.0.0.3', 50000))
        a = Tester(2, 2)
        b = pickle.dumps(a)
        sock.send("Hello, world")
        sock.flush()
        sock.send(b)
        sock.shutdown()
        sock.close()

    I found this information from ftp://ftp.pbone.net/mirror/ftp.pld-linux.org/dists/2.0/PLD/i586/PLD/RPMS/python-pyOpenSSL-examples-0.6-2.i586.rpm which contains some example scripts. As you might gather, I don't fully understand the sections between the "# ????????" markers. I don't get why the certificate and private key are needed on both the client and server. I'm not sure where each should go, but shouldn't I only need to distribute one part of the key (probably the public part)? It undermines the purpose of having asymmetric keys if you still need both on each server, right? I tried alternately removing either the pkey or cert on either box, and I get the following error no matter which I remove:

        OpenSSL.SSL.Error: [('SSL routines', 'SSL3_READ_BYTES', 'sslv3 alert handshake failure'),
                            ('SSL routines', 'SSL3_WRITE_BYTES', 'ssl handshake failure')]

    Could someone explain whether this is the expected behavior for SSL? Do I really need to distribute the private key and public cert to all my clients? I'm trying to avoid any huge security problems, and leaking private keys would tend to be a big one... Thanks for the help!
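
    A note on what the marked sections do, which also answers the distribution question: use_privatekey_file/use_certificate_file set that side's own identity, while load_verify_locations names the CA used to check the peer. Because the server passes VERIFY_FAIL_IF_NO_PEER_CERT, this is mutual TLS: each side needs its own certificate and private key, both signed by the shared CA, and the only file distributed to everyone is CAcert.pem. A private key never leaves its machine, and the handshake failure when one side lacks its key pair is the expected outcome. A sketch of the same arrangement with the modern stdlib ssl module (not available on Python 2.3; shown for the concepts, with hypothetical file names):

        # Mutual TLS: each side has its OWN key pair; the only shared file
        # is the CA certificate used to validate the peer.
        import ssl

        # --- server side ---
        srv_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        srv_ctx.load_cert_chain("server-cert.pem", "server-key.pem")  # server's own pair
        srv_ctx.load_verify_locations("CAcert.pem")   # used to validate client certs
        srv_ctx.verify_mode = ssl.CERT_REQUIRED       # like VERIFY_FAIL_IF_NO_PEER_CERT

        # --- client side ---
        cli_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
        cli_ctx.load_cert_chain("client-cert.pem", "client-key.pem")  # client's own pair
        cli_ctx.load_verify_locations("CAcert.pem")   # used to validate the server cert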


  • WordPress Search Results in Order

    - by Brad Houston
    One of my clients' websites, www.kevinsplants.co.uk, is not showing the search results in alphabetical order. How do I go about ordering the results alphabetically? We are using the Shopp plugin, and I believe it's that plugin that is generating the results! Cheers, Brad

        case "orderby-list":
            if (isset($Shopp->Category->controls)) return false;
            if (isset($Shopp->Category->smart)) return false;
            $menuoptions = Category::sortoptions();
            $title = "";
            $string = "";
            $default = $Shopp->Settings->get('default_product_order');
            if (empty($default)) $default = "title";
            if (isset($options['default'])) $default = $options['default'];
            if (isset($options['title'])) $title = $options['title'];
            if (value_is_true($options['dropdown'])) {
                if (isset($Shopp->Cart->data->Category['orderby']))
                    $default = $Shopp->Cart->data->Category['orderby'];
                $string .= $title;
                $string .= '<form action="'.esc_url($_SERVER['REQUEST_URI']).'" method="get" id="shopp-'.$Shopp->Category->slug.'-orderby-menu">';
                if (!SHOPP_PERMALINKS) {
                    foreach ($_GET as $key => $value)
                        if ($key != 'shopp_orderby')
                            $string .= '<input type="hidden" name="'.$key.'" value="'.$value.'" />';
                }
                $string .= '<select name="shopp_orderby" class="shopp-orderby-menu">';
                $string .= menuoptions($menuoptions,$default,true);
                $string .= '</select>';
                $string .= '</form>';
                $string .= '<script type="text/javascript">';
                $string .= "jQuery('#shopp-".$Shopp->Category->slug."-orderby-menu select.shopp-orderby-menu').change(function () { this.form.submit(); });";
                $string .= '</script>';
            } else {
                if (strpos($_SERVER['REQUEST_URI'],"?") !== false)
                    list($link,$query) = explode("\?",$_SERVER['REQUEST_URI']);
                $query = $_GET;
                unset($query['shopp_orderby']);
                $query = http_build_query($query);
                if (!empty($query)) $query .= '&';
                foreach($menuoptions as $value => $option) {
                    $label = $option;
                    $href = esc_url($link.'?'.$query.'shopp_orderby='.$value);
                    $string .= '<li><a href="'.$href.'">'.$label.'</a></li>';
                }
            }
            return $string;
            break;


  • Which MS technologies would be suited for a data intensive application?

    - by steve.tse
    I'm a junior VB.net developer with little application design knowledge. I've been reading a lot of material online regarding different design patterns, frameworks, and methodologies. It's become a bit confusing for me. Right now I'm trying to decide on what language would be best suited to convert an existing VB6 application (with a SQL Server backend). I need to update the UI and add more user functionality and reporting capabilities. Initially I was thinking of using WPF and attempting the MVVM model for this big project. Reports would be generated from SSRS. A peer suggested using ASP.net, and I don't have enough experience to determine what would be better. The senior programmers here are stuck on using VB6 and don't have any input on what to use. They are encouraging me to use the latest technologies. This application would be for ~20 users in a central location. Ideally I would stick to a Microsoft .net language. The current interface is similar to a datagrid table where the user clicks in to see the detail of each record. Users need to have multiple records open at any given time. I look forward to all the advice I can get.

    EDIT 2010/04/22 2:47 PM EST

        What is your audience? Internal clients within an intranet.
        How complex are the interactions you expect to implement? Not very: displaying data from SQL Server in the UI, and allowing user updates to said data. Typically just one user modifying a record.
        Do you require near real-time data updates? No.
        How often do you expect to update the application after the first release? Twice a year.
        Do you expect a well-defined set of client platforms? Yes, a Windows XP environment, potentially upgrading to Win7. Currently on IE6, moving to IE7 or 8 within a couple of months.
        Do users need access from anywhere? No, just from their PCs.


  • jQuery - Save to SQL via PHP

    - by Kenny Bones
    This is probably easy for you guys, but I can't understand it. I want to save the filename of an image to its own row in the SQL database. Basically, I log on to the site, where I have my own userID, and each user has his own column for background images; the user can choose his own image if he wants to. So when the user clicks on the image he wants, a jQuery click event occurs and an Ajax call is made to a PHP file which is supposed to take care of the actual update. The row for each user always exists, so only an update of the data is necessary. First, I collect the filename from the CSS property 'background-image' and split it so I get only the filename. I then store that filename in a variable I call 'filename', which is then passed on to this jQuery snippet:

        $.ajax({
            url: 'save_to_db.php',
            data: filename,
            dataType: 'Text',
            type: 'POST',
            success: function(data) {
                // Just for testing purposes.
                alert('Background changed to: ' + data);
            }
        });

    And this is the PHP:

        <?php require("dbconnect.php") ?>
        <?php
        $uploadstring = ($_POST['filename']);
        mysql_query("UPDATE brukere SET brukerBakgrunn = $uploadstring WHERE brukerID=" .$_SESSION['id'] ."";
        mysql_close();
        ?>

    Basically, each user has their own ID called 'brukerID', the table everything is in is called 'brukere', and the column I'm supposed to update is the one called 'brukerBakgrunn'. When I just run the JavaScript snippet, I get this message box in return:

        Background changed to: Parse error: syntax error, unexpected ';' in /var/www/clients/client2/web8/web/save_to_db.php on line 8

    I actually get this message box twice, not sure why. Line 8 in 'save_to_db.php' is this one:

        mysql_query("UPDATE brukere SET brukerBakgrunn = $uploadstring WHERE brukerID=" .$_SESSION['id'] ."";

    Not sure if you need to see dbconnect.php as well; I can add that later if you need to see it. So what am I missing here?


  • Git repository sync between computers when moving around?

    - by Johan
    Hi, let's say that I have a desktop PC and a laptop, and sometimes I work on the desktop and sometimes on the laptop. What is the easiest way to move a git repository back and forth? I want the git repositories to be identical, so that I can continue where I left off on the other computer. I would like to make sure that I have the same branches and tags on both of the computers. Thanks, Johan

    Note: I know how to do this with Subversion, but I'm curious how this would work with git. If it is easier, I can use a third PC as a classical server that the two PCs can sync against.

    Note: Both computers are running Linux.

    Update: So let's try XANI's idea with a bare git repo on a server, and the push command syntax from KingCrunch. In this example there are two clients and one server. Let's create the server part first:

        ssh user@server
        mkdir -p ~/git_test/workspace
        cd ~/git_test/workspace
        git --bare init

    Then from one of the other computers I try to get a copy of the repo with clone:

        git clone user@server:~/git_test/workspace/
        Initialized empty Git repository in /home/user/git_test/repo1/workspace/.git/
        warning: You appear to have cloned an empty repository.

    Then go into that repo and add a file:

        cd workspace/
        echo "test1" > testfile1.txt
        git add testfile1.txt
        git commit testfile1.txt -m "Added file testfile1.txt"
        git push origin master

    Now the server is updated with testfile1.txt. Anyway, let's see if we can get this file from the other computer:

        mkdir -p ~/git_test/repo2
        cd ~/git_test/repo2
        git clone user@server:~/git_test/workspace/
        cd workspace/
        git pull

    And now we can see the testfile. At this point we can edit it with some more content and update the server again:

        echo "test2" >> testfile1.txt
        git add testfile1.txt
        git commit -m "Test2"
        git push origin master

    Then we go back to the first client and do a git pull to see the updated file. And now I can move back and forth between the two computers, and add a third if I like to.


  • MySQL Database Query - Codeigniter

    - by user2450349
    I am building an application with CodeIgniter and need some help with a DB query. I have a table called users with the following fields: user_id, user_name, user_password, user_email, user_role, user_manager_id. In my app, I pull all records from the users table using the following:

        function get_clients() {
            $this->db->select('*');
            $this->db->where('user_role', 'client');
            $this->db->order_by("user_name", "Asc");
            $query = $this->db->get("users");
            return $query->result_array();
        }

    This works as expected; however, when I display the results in the view, I also want to display a new column called Manager, which will show the manager's user_name. The user_manager_id is the ID of the manager's own row in the same table. I'm guessing you can create an outer join on the same table, but I'm not sure. In the view, I am displaying the returned info as follows:

        <table class="table table-striped" id="zero-configuration">
          <thead>
            <tr>
              <th>Name</th>
              <th>Email</th>
              <th>Manager</th>
            </tr>
          </thead>
          <tbody>
            <?php foreach($clients as $row) { ?>
            <tr>
              <td><?php echo $row['user_name']; ?> (<?php echo $row['user_username']; ?>)</td>
              <td><?php echo $row['user_email']; ?></td>
              <td><?php echo $row['???']; ?></td>
            </tr>
            <?php } ?>
          </tbody>
        </table>

    Any idea of how I can form the query and display the manager name in the view? Example:

        user_id  user_name  user_password  user_email         user_role  user_manager_id
        1        Ollie      adjjk34jcd     [email protected]  client     null
        2        James      djklsdfsdjk    [email protected]  client     1

    When I query the database, I want to display results like this:

        Ollie  [email protected]
        James  [email protected]  Ollie
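
    The usual answer is a self-join: join users to itself through user_manager_id, aliasing the second copy, and expose the alias's user_name as the manager column. A runnable sketch with sqlite3 (the SQL ports directly to MySQL and to CodeIgniter's query builder; the table data and e-mail addresses are made up for the demo):

        # Self-join: alias the users table twice so each client row can
        # pull its manager's name from the same table.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE users (
                user_id INTEGER PRIMARY KEY, user_name TEXT, user_password TEXT,
                user_email TEXT, user_role TEXT, user_manager_id INTEGER);
            INSERT INTO users VALUES
                (1, 'Ollie', 'adjjk34jcd', 'ollie@example.com', 'client', NULL),
                (2, 'James', 'djklsdfsdjk', 'james@example.com', 'client', 1);
        """)

        rows = conn.execute("""
            SELECT u.user_name, u.user_email,
                   m.user_name AS manager_name          -- second alias of users
            FROM users AS u
            LEFT JOIN users AS m ON m.user_id = u.user_manager_id
            WHERE u.user_role = 'client'
            ORDER BY u.user_name ASC
        """).fetchall()

        for name, email, manager in rows:
            print(name, email, manager or "")    # Ollie's manager is NULL

    The LEFT JOIN matters: an INNER JOIN would silently drop clients, like Ollie, who have no manager. In the view, the manager then appears as $row['manager_name'].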


  • A Security (encryption) Dilemma

    - by TravisPUK
    I have an internal WPF client application that accesses a database. The application is a central resource for a Support team and as such includes remote access/login information for clients. At the moment this database is not available via a web interface etc., but one day it is likely to be. The remote access information includes the usernames and passwords for the clients' networks, so that our clients' software applications can be remotely supported by us. I need to store the usernames and passwords in the database and provide the support consultants access to them so that they can log in to the client's system and then provide support. Hope this is making sense. So the dilemma is that I don't want to store the usernames and passwords in cleartext in the database, to ensure that if the DB were ever compromised, I would not be handing access to our clients' networks to whomever gets the database. I have looked at two-way encryption of the passwords, but as they say, two-way is not much different from cleartext: if you can decrypt it, so can an attacker... eventually. The problem here is that I have set up a method that uses a salt and a passcode stored in the application, and I have used a salt stored in the DB, but all have their weaknesses; i.e., if the app were decompiled by reflection, it would expose the salts. How can I secure the usernames and passwords in my database, and yet still provide the ability for my support consultants to view the information in the application so they can use it to log in? This is obviously different from storing users' passwords, as those are one-way hashed because I don't need to know what they are. But I do need to know what the clients' remote access passwords are, as we need to enter them at the time of remoting to them. Anybody have some theories on what would be the best approach here?

    Update: The function I am trying to build is for our CRM application that will store the remote access details for the client. The CRM system provides call/issue tracking functionality, and during the course of investigating an issue, the support consultant will need to remote in. They will then view the client's remote access details and make the connection.
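
    One commonly suggested direction, sketched under the assumption that a key store outside both the database and the application binary is available (DPAPI, an HSM, or a key-management service): encrypt the stored credentials with a symmetric key held in that store, so a stolen database dump is useless on its own. Using the third-party Python 'cryptography' package purely for illustration:

        # Reversible encryption with the key held OUTSIDE the database and
        # outside the shipped binary; an attacker needs both the dump and
        # the key store to recover anything.
        from cryptography.fernet import Fernet

        key = Fernet.generate_key()       # in practice: load from the key store
        f = Fernet(key)

        token = f.encrypt(b"client-vpn-password")    # this ciphertext goes in the DB
        print(f.decrypt(token).decode())             # support app shows the real value

    Key rotation is then just decrypt-and-re-encrypt with a new key. Nothing the application itself can decrypt will survive an attacker who fully controls the application host, so access to the decrypting service is the thing to audit and restrict.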


  • SQL version control methodology

    - by Tom H.
    There are several questions on SO about version control for SQL and lots of resources on the web, but I can't find something that quite covers what I'm trying to do. First off, I'm talking about a methodology here. I'm familiar with the various source control applications out there, and I'm familiar with tools like Red Gate's SQL Compare, etc., and I know how to write an application to check things in and out of my source control system automatically. If there is a tool which would be particularly helpful in providing a whole new methodology, or which has a useful and uncommon functionality, then great, but for the tasks mentioned above I'm already set. The requirements that I'm trying to meet are:

        - The database schema and look-up table data are versioned.
        - DML scripts for data fixes to larger tables are versioned.
        - A server can be promoted from version N to version N + X, where X may not always be 1.
        - Code isn't duplicated within the version control system; for example, if I add a column to a table, I don't want to have to make sure that the change is in both a create script and an alter script.
        - The system needs to support multiple clients who are at various versions of the application (we're trying to get them all to within 1 or 2 releases, but we're not there yet).

    Some organizations keep incremental change scripts in their version control, and to get from version N to N + 3 you would have to run the scripts for N to N+1, then N+1 to N+2, then N+2 to N+3. Some of these scripts can be repetitive (for example, a column is added but then later altered to change the data type). We're trying to avoid that repetitiveness, since some of the client DBs can be very large, so these changes might take longer than necessary. Some organizations will simply keep a full database build script at each version level, then use a tool like SQL Compare to bring a database up to one of those versions. The problem here is that intermixing DML scripts can be a problem. Imagine a scenario where I add a column, use a DML script to fill said column, and then in a later version that column name is changed. Perhaps there is some hybrid solution? Maybe I'm just asking for too much? Any ideas or suggestions would be greatly appreciated, though. If the moderators think that this would be more appropriate as a community wiki, please let me know. Thanks!
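
    For the incremental-script camp, the bookkeeping usually reduces to a version table plus an ordered script runner, which makes "promote from N to N + X" a single operation. A minimal sketch (shown with sqlite3 for brevity; the zero-padded script naming is a made-up convention):

        # Apply every not-yet-seen migration script, in order, recording
        # each applied version in a schema_version table.
        import sqlite3
        from pathlib import Path

        def current_version(conn):
            conn.execute("CREATE TABLE IF NOT EXISTS schema_version (v INTEGER)")
            row = conn.execute("SELECT MAX(v) FROM schema_version").fetchone()
            return row[0] or 0

        def upgrade(conn, script_dir, target):
            v = current_version(conn)
            while v < target:
                v += 1
                sql = Path(script_dir, f"{v:04d}.sql").read_text()
                conn.executescript(sql)              # run the whole step
                conn.execute("INSERT INTO schema_version VALUES (?)", (v,))
                conn.commit()

        conn = sqlite3.connect("app.db")
        upgrade(conn, "migrations", target=7)        # N -> N + X, skipping nothing

    This doesn't by itself solve the repetitiveness concern, but it does make the multi-client requirement trivial: every client, whatever its version, runs the same runner and ends up at the same target.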


  • Image upload not functioning correctly

    - by PurpleSmurf
    I'm still trying to get to grips with PHP, and I'm trying to make a form that uploads a picture to a database. I don't have permission for move_uploaded_file, so I'm using copy() as an alternative. Everywhere I've seen people experiencing similar problems, it has been with move_uploaded_file, so I'm rather stuck. Trying to copy the image from the desktop doesn't seem to be working; it's not throwing up any more PHP errors, but it is displaying the error message for when something goes wrong. The form sends data to two tables in the database, but I'm mainly concerned with the upload not working. There are over 200 lines, so I'll post a snippet of the upload code. Thank you in advance:

        function getExtension($str) {
            $i = strrpos($str,".");
            if (!$i) { return ""; }
            $l = strlen($str) - $i;
            $ext = substr($str,$i+1,$l);
            return $ext;
        }

        // This variable is used as a flag. The value is initialized with 0 (meaning no error
        // found) and it will be changed to 1 if an error occurs. If an error occurs, the
        // file will not be uploaded.
        $errors = 0;
        // checks if the form has been submitted
        if (isset($_POST['submitted'])) {
            // reads the name of the file the user submitted for uploading
            $image = $_FILES['image']['name'];
            // if it is not empty
            if ($image) {
                // get the original name of the file from the client's machine
                $filename = stripslashes($_FILES['image']['name']);
                // get the extension of the file in a lower case format
                $extension = getExtension($filename);
                $extension = strtolower($extension);
                // if it is an unknown extension, class it as an error and do not upload the
                // file, otherwise continue
                if (($extension != "jpg") && ($extension != "jpeg") && ($extension != "png") && ($extension != "gif")) {
                    // print error message
                    echo '<h1>Unknown extension!</h1>';
                    $errors = 1;
                } else {
                    // get the size of the image in bytes
                    // $_FILES['image']['tmp_name'] is the temporary filename of the file in
                    // which the uploaded file was stored on the server
                    $size = filesize($_FILES['image']['tmp_name']);
                    // give it a unique name, for example the time in unix time format
                    $image_name = time().'.'.$extension;
                    // the new name will contain the full path where it will be stored (images folder)
                    $newname = "/home/k0929907/www/uploads/".$image_name;
                    // verify whether the image has been copied, and print an error if not
                    $copied = copy($_FILES['image']['tmp_name'], $newname);
                    if (!$copied) {
                        echo '<h1>Copy unsuccessful!</h1>';
                        $errors = 1;
                    }
                }
            }
        }


  • Resource id #45 [on hold]

    - by user2916506
    What is this error? I am trying to connect from PHP to Traffic Live MySQL and do a POST, and for some IDs I get this error. With the biggest part of the contacts my code works just fine, but it keeps hitting this trouble maker. What can I do? Here is the trouble-making code:

        if ($sales_datemodified[$h] > $traffic_datemodified[$t]) {
            //if ($traffic_id[$t] != 22033) {
            $postrequestclient = $clients->post("staff/employee", null,
                '{"@class": "com.sohnar.trafficlite.transfer.trafficcompany.TrafficEmployeeTO",
                  "id": ' . $traffic_id[$t] . ',
                  "locationId": ' . $traffic_locationid[$t] . ',
                  "departmentId": ' . $traffic_departmentid[$t] . ',
                  "ownerCompanyId": ' . $traffic_ownercompanyid[$t] . ',
                  "userId": ' . $traffic_userid[$t] . ',
                  "userName": "' . $traffic_username[$t] . '",
                  "employeeDetails": {
                    "id": ' . $traffic_id[$t] . ',
                    "jobTitle": "' . $sales_title[$h] . '",
                    "costPerHour": {
                      "amountString": ' . $traffic_amountString[$t] . ',
                      "currencyType": "' . $traffic_currencyType[$t] . '"},
                    "hoursWorkedPerDayMinutes": ' . $traffic_hwpdm[$t] . ',
                    "personalDetails": {
                      "id": ' . $traffic_persdetId[$t] . ',
                      "firstName": "' . $sales_firstname[$h] . '",
                      "middleName": "' . $traffic_middleName[$t] . '",
                      "lastName": "' . $sales_lastname[$h] . '",
                      "emailAddress": "' . $traffic_username[$t] . '",
                      "workPhone": "' . $sales_phone[$h] . '",
                      "mobilePhone": "' . $sales_mobilephone[$h] . '"
                    }}}')->setAuth(user, password);
            $postresponseclient = $postrequestclient->send()->json();

    I don't have any errors in my code. That commented "if" is there to exclude the trouble-making contact on which I get this error; with it uncommented, the program runs fine without any problems. The problem is that this test is made on a small number of contacts, and if I add more contacts to the test, I get more errors of this kind. This should be a synchronization job, so excluding all the "bad" contacts won't work.


  • Multi-tenant ASP.NET MVC – Introduction

    - by zowens
    I've read a few different blogs that talk about multi-tenancy and how to resolve some of the issues surrounding it. What I've come to realize is that these implementations overcomplicate the issues and give only a muddy implementation! I've seen some really illogical code out there. I have recently been building a multi-tenancy framework for internal use at eagleenvision.net. Through this process, I've realized a few different techniques that make building multi-tenant applications actually quite easy. I will be posting a few different entries on the issue and my personal implementation. In this first post, I will discuss what multi-tenancy means and how my implementation will be structured.

    So what's the problem? Here's the deal. Multi-tenancy is basically a technique of code reuse of web application code. A multi-tenant application is an application that runs a single instance for multiple clients. Here the "client" is different URL bindings on IIS using ASP.NET MVC. The problem with different instances of the, essentially, same application is that you have to spin up different instances of ASP.NET. As the number of running instances of ASP.NET grows, so does the memory footprint of IIS. Stack Exchange shifted its architecture to multi-tenancy in March. As the blog post explains, multi-tenancy saves cost in terms of memory utilization and physical disc storage. If you use the same code base for many applications, multi-tenancy just makes sense. You'll reduce the amount of work it takes to synchronize the site implementations, and you'll thank your lucky stars later for choosing to use one application for multiple sites. Multi-tenancy allows the freedom of extensibility while relying on some pre-built code.

    You'd think this would be simple. I have actually seen a real lack of reference material on the subject in terms of ASP.NET MVC. This is somewhat surprising given the number of users of ASP.NET MVC. However, I will certainly fill the void ;). Implementing a multi-tenant application takes a little thinking. It's not straightforward, because the possibilities of implementation are endless. I have yet to see a great implementation of a multi-tenant MVC application. The only one that comes close to what I have in mind is Rob Ashton's implementation (all the entries are listed on this page). There's some really nasty code in there... something I'd really like to avoid. He has also written a library (MvcEx) that attempts to aid multi-tenant development. This code is even worse, in my honest opinion. Once I start seeing Reflection.Emit, I have to assume the worst :) In all seriousness, if his implementation makes sense to you, use it! It's a fine implementation that should be given a look. At least look at the code. I will reference MvcEx going forward as a comparison to my implementation. I will explain why my approach differs from MvcEx and how it is better or worse (hopefully better).

    Core goals of my multi-tenant implementation: The first, and foremost, goal is to use Inversion of Control containers to my advantage. As you will see throughout this series, I pass around containers quite frequently and rely on their use heavily. I will be using StructureMap in my implementation. However, you could probably use your favorite IoC tool instead. <RANT>However, please don't be stupid and abstract your IoC tool. Each IoC is powerful, and by abstracting the capabilities, you're doing yourself a real disservice. Who in the world swaps out IoC tools...? No one!</RANT> (It had to be said.) I will outline some of the goodness of StructureMap as we go along. This is really an invaluable tool in my tool belt and simple to use in my multi-tenant implementation. The second core goal is to represent a tenant as easily as possible. Just as a dependency container will be a first-class citizen, so will a tenant. This allows us to easily extend and use tenants. This will also allow different ways of "plugging in" tenants into your application. In my implementation, there will be a single dependency container for a single tenant. This will enable isolation of the dependencies of the tenant. The third goal is to use composition as a means to delegate "core" functions out to the tenant. More on this later.

    Features: In MvcEx, "Modules" are a code element of the infrastructure. I have simplified this concept and named it "Features". A feature is a simple element of an application. Controllers can be specified to have a feature, and actions can have "sub-features". Each tenant can select the features it needs, and the other features will be hidden to the tenant's users. My implementation doesn't require something to be a feature. A controller can be common to all tenants. For example (as you will see), I have a "Content" controller that will return the CSS, JavaScript and images for a tenant. This is common logic for all tenants and shouldn't be hidden or considered a "feature"; Content is a core component.

    Up next: My next post will be all about the code. I will reveal some of the foundation of the way I do multi-tenancy. I will have posts dedicated to Foundation, Controllers, Views, Caching, Content and how to set up the tenants. Each post will be in-depth about the issues and implementation details, while adhering to my core goals outlined in this post. As always, comment with questions, DM me on Twitter or send me an email.


  • TFS 2010 Build Custom Activity for Merging Assemblies

    - by Jakob Ehn
    *** The sample build process template discussed in this post is available for download from here: http://cid-ee034c9f620cd58d.office.live.com/self.aspx/BlogSamples/ILMerge.xaml ***

    In my previous post I talked about library builds that we use to build and replicate dependencies between applications in TFS. This is typically used for common libraries and tools that several other applications need to reference. When the libraries grow in size over time, so does the number of assemblies. So all solutions that use the common library must reference all the necessary assemblies that they need, and if we, for example, do a refactoring and extract some code into a new assembly, all the clients must update their references to reflect these changes, otherwise it won't compile. To improve on this, we use a tool from Microsoft Research called ILMerge (download from here). It can be used to merge several assemblies into one assembly that contains all the types. If you haven't used this tool before, you should check it out. Previously I have implemented this in builds using a simple batch file that contains the full command, something like this:

        "%ProgramFiles(x86)%\microsoft\ilmerge\ilmerge.exe" /target:library /attr:ClassLibrary1.bl.dll /out:MyNewLibrary.dll ClassLibrary1.dll ClassLibrary2.dll ClassLibrary3.dll

    This merges 3 assemblies (ClassLibrary1, 2 and 3) into a new assembly called MyNewLibrary.dll. It will copy the attributes (file version, product version etc.) from the assembly named by the /attr switch. For more info on the ILMerge command line tool, see the above link. This approach works, but requires a little bit too much knowledge from the developers creating builds, therefore I have implemented a custom activity that wraps the use of ILMerge. This makes it much simpler to set up a new build definition and have the build automatically do the merging. The usage of the activity is then implemented as part of the Library Build process template mentioned in the previous post. For this article I have just created a simple build process template that only performs the ILMerge operation.

    Below is the code for the custom activity. To make it compile, you need to reference the ILMerge.exe assembly.

        /// <summary>
        /// Activity for merging a list of assemblies into one, using ILMerge
        /// </summary>
        public sealed class ILMergeActivity : BaseCodeActivity
        {
            /// <summary>
            /// A list of file paths to the assemblies that should be merged
            /// </summary>
            [RequiredArgument]
            public InArgument<IEnumerable<string>> InputAssemblies { get; set; }

            /// <summary>
            /// Full path to the generated assembly
            /// </summary>
            [RequiredArgument]
            public InArgument<string> OutputFile { get; set; }

            /// <summary>
            /// Which input assembly the attributes for the generated assembly should be copied from.
            /// Optional. If not specified, the first input assembly will be used
            /// </summary>
            public InArgument<string> AttributeFile { get; set; }

            /// <summary>
            /// Kind of assembly to generate, dll or exe
            /// </summary>
            public InArgument<TargetKindEnum> TargetKind { get; set; }

            // If your activity returns a value, derive from CodeActivity<TResult>
            // and return the value from the Execute method.
            protected override void Execute(CodeActivityContext context)
            {
                string message = InputAssemblies.Get(context).Aggregate("", (current, assembly) => current + (assembly + " "));
                TrackMessage(context, "Merging " + message + " into " + OutputFile.Get(context));

                ILMerge m = new ILMerge();
                m.SetInputAssemblies(InputAssemblies.Get(context).ToArray());
                m.TargetKind = TargetKind.Get(context) == TargetKindEnum.Dll ? ILMerge.Kind.Dll : ILMerge.Kind.Exe;
                m.OutputFile = OutputFile.Get(context);
                m.AttributeFile = !String.IsNullOrEmpty(AttributeFile.Get(context))
                    ? AttributeFile.Get(context)
                    : InputAssemblies.Get(context).First();
                m.SetTargetPlatform(RuntimeEnvironment.GetSystemVersion().Substring(0,2), RuntimeEnvironment.GetRuntimeDirectory());
                m.Merge();

                TrackMessage(context, "Generated " + m.OutputFile);
            }
        }

        [Browsable(true)]
        public enum TargetKindEnum { Dll, Exe }

    NB: The activity inherits from a BaseCodeActivity class, an internal helper class that contains some methods and properties useful for most custom activities. In this case, it uses the TrackMessage method for writing to the build log. You either need to remove the TrackMessage method calls, or implement this yourself (which is not very hard...).

    The custom activity has the following input arguments:

        InputAssemblies   A list of the (full) paths of the assemblies to merge
        OutputFile        The name of the resulting merged assembly
        AttributeFile     Which assembly to use as the template for the attributes of the merged assembly.
                          This argument is optional; if left blank, the first assembly in the input list is used
        TargetKind        Decides what type of assembly to create; can be either a dll or an exe

    Of course, there are more switches to ILMerge.exe, and these can be exposed as input arguments as well if you need them. To show how the custom activity can be used, I have attached a build process template (see link at the top of this post) that merges the output of the projects being built (CommonLibrary.dll and CommonLibrary2.dll) into a merged assembly (NewLibrary.dll). The build process template has custom process parameters for this. The Assemblies To Merge argument is passed into a FindMatchingFiles activity to locate all assemblies that are located in the BinariesDirectory folder after the compilation has been performed by Team Build. The complete sequence of activities that performs the merge operation is located at the end of the Try, Compile, Test and Associate... sequence: it splits the AssembliesToMerge parameter, appends the full path (using the BinariesDirectory variable) and then enumerates the matching files using the FindMatchingFiles activity. When running the build, you can see that it merges two assemblies into a new one, and the merged assembly (and associated pdb file) is copied to the drop location together with the rest of the assemblies.

    Read the article

  • Dependency Injection in ASP.NET MVC NerdDinner App using Ninject

    - by shiju
    In this post, I am applying Dependency Injection to the NerdDinner application using Ninject. The controllers of the NerdDinner application have Dependency Injection enabled constructors, so we can apply Dependency Injection through the constructors without changing any existing code.

A Dependency Injection framework injects the dependencies into a class when the dependencies are needed. Dependency Injection enables looser coupling between classes and their dependencies, provides better testability of an application, and removes the need for clients to know about their dependencies and how to create them. If you are not familiar with Dependency Injection and Inversion of Control (IoC), read Martin Fowler's article Inversion of Control Containers and the Dependency Injection pattern.

The Open Source project NerdDinner is a great resource for learning ASP.NET MVC. A free eBook provides an end-to-end walkthrough of building the NerdDinner.com application. The free eBook and the Open Source NerdDinner application are extremely useful if anyone is trying to learn ASP.NET MVC. The first release of NerdDinner was a sample for the first chapter of Professional ASP.NET MVC 1.0. Currently the application is being updated to ASP.NET MVC 2 and you can get the latest source from the source code tab of NerdDinner at http://nerddinner.codeplex.com/SourceControl/list/changesets. I have taken the latest ASP.NET MVC 2 source code of the application and applied Dependency Injection using Ninject and the Ninject extension Ninject.Web.Mvc.

Ninject & Ninject.Web.Mvc

Ninject is available at http://github.com/enkari/ninject and Ninject.Web.Mvc is available at http://github.com/enkari/ninject.web.mvc. Ninject is a lightweight and great dependency injection framework for .NET, and a great choice when building ASP.NET MVC applications. Ninject.Web.Mvc is an extension for Ninject which provides integration with ASP.NET MVC.

Controller constructors and dependencies of the NerdDinner application

Listing 1 – Constructor of DinnersController

public DinnersController(IDinnerRepository repository)
{
    dinnerRepository = repository;
}

Listing 2 – Constructor of AccountController

public AccountController(IFormsAuthentication formsAuth, IMembershipService service)
{
    FormsAuth = formsAuth ?? new FormsAuthenticationService();
    MembershipService = service ?? new AccountMembershipService();
}

Listing 3 – Constructor of AccountMembershipService – the concrete class of IMembershipService

public AccountMembershipService(MembershipProvider provider)
{
    _provider = provider ?? Membership.Provider;
}

Dependencies of NerdDinner

DinnersController, RSVPController, SearchController and ServicesController have a dependency on IDinnerRepository. The concrete implementation of IDinnerRepository is DinnerRepository. AccountController has dependencies on IFormsAuthentication and IMembershipService. The concrete implementation of IFormsAuthentication is FormsAuthenticationService, and the concrete implementation of IMembershipService is AccountMembershipService. The AccountMembershipService has a dependency on the ASP.NET Membership Provider.
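This constructor injection is what gives the better testability mentioned above. As a quick illustration (a hedged sketch: FakeDinnerRepository is a hypothetical in-memory implementation of IDinnerRepository, not code from the NerdDinner solution), a unit test can construct the controller without touching the database:

[TestMethod]
public void Can_Construct_DinnersController_With_A_Fake_Repository()
{
    // The fake stands in for the real database-backed DinnerRepository
    IDinnerRepository fake = new FakeDinnerRepository();
    var controller = new DinnersController(fake);
    Assert.IsNotNull(controller);
}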
Dependency Injection in NerdDinner using Ninject

The steps below configure Ninject to apply controller injection in the NerdDinner application.

Step 1 – Add references for Ninject

Open the NerdDinner application and add references to Ninject.dll and Ninject.Web.Mvc.dll. Both are available from http://github.com/enkari/ninject and http://github.com/enkari/ninject.web.mvc.

Step 2 – Extend HttpApplication with NinjectHttpApplication

The Ninject.Web.Mvc extension provides the integration between Ninject and ASP.NET MVC. For this, you have to extend your HttpApplication with NinjectHttpApplication. Open Global.asax.cs and inherit your MVC application from NinjectHttpApplication instead of HttpApplication.

public class MvcApplication : NinjectHttpApplication

Then the Application_Start method should be replaced with the OnApplicationStarted method. Inside OnApplicationStarted, call the RegisterAllControllersIn() method.

protected override void OnApplicationStarted()
{
    AreaRegistration.RegisterAllAreas();
    RegisterRoutes(RouteTable.Routes);
    ViewEngines.Engines.Clear();
    ViewEngines.Engines.Add(new MobileCapableWebFormViewEngine());
    RegisterAllControllersIn(Assembly.GetExecutingAssembly());
}

The RegisterAllControllersIn method enables activating all controllers through Ninject in the assembly you supply; here we are passing the current assembly as the parameter. Now we can expose dependencies of controller constructors and properties to request injections.

Step 3 – Create Ninject modules

We configure the dependency injection mapping information using Ninject modules. Modules just need to implement the INinjectModule interface, but most should extend the NinjectModule class for simplicity.

internal class ServiceModule : NinjectModule
{
    public override void Load()
    {
        Bind<IFormsAuthentication>().To<FormsAuthenticationService>();
        Bind<IMembershipService>().To<AccountMembershipService>();
        Bind<MembershipProvider>().ToConstant(Membership.Provider);
        Bind<IDinnerRepository>().To<DinnerRepository>();
    }
}

The binding information specified in the Load method tells the Ninject container to inject an instance of DinnerRepository when there is a request for IDinnerRepository, an instance of FormsAuthenticationService when there is a request for IFormsAuthentication, and an instance of AccountMembershipService when there is a request for IMembershipService. The AccountMembershipService class has a dependency on the ASP.NET Membership provider, so we configure Ninject to inject the Membership Provider instance as a constant.

When configuring the binding information, you can specify the object scope in your application. There are four built-in scopes available in Ninject:

Transient - A new instance of the type will be created each time one is requested. (This is the default scope.) Binding method: .InTransientScope()
Singleton - Only a single instance of the type will be created, and the same instance will be returned for each subsequent request. Binding method: .InSingletonScope()
Thread - One instance of the type will be created per thread. Binding method: .InThreadScope()
Request - One instance of the type will be created per web request, and will be destroyed when the request ends. Binding method: .InRequestScope()
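For example, if you wanted the dinner repository to be shared within a single web request (a variation on the binding above, not something the NerdDinner sample itself requires), the binding in the Load method would look like this:

// One DinnerRepository instance per web request
Bind<IDinnerRepository>().To<DinnerRepository>().InRequestScope();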
Step 4 – Configure the Ninject kernel

Once you have created a Ninject module, you load it into a container called the kernel. To request an instance of a type from Ninject, you call the Get() extension method. We configure the kernel through the CreateKernel method in Global.asax.cs.

protected override IKernel CreateKernel()
{
    var modules = new INinjectModule[]
    {
        new ServiceModule()
    };

    return new StandardKernel(modules);
}

Here we are loading the Ninject module (the ServiceModule class created in step 3) into the container called the kernel, which performs the dependency injection.

Source code

You can download the source code from http://nerddinneraddons.codeplex.com. I just put the modified source code onto the CodePlex repository. The repository will be updated with more add-ons for the NerdDinner application.
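A footnote on the Get() method mentioned in step 4: with RegisterAllControllersIn the controllers are resolved for you, but you can also resolve a service directly from the kernel. A minimal sketch, assuming the ServiceModule from step 3 (Get<T>() is an extension method in the Ninject namespace):

// Resolving a service directly from the kernel
IKernel kernel = new StandardKernel(new ServiceModule());
IDinnerRepository repository = kernel.Get<IDinnerRepository>();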

    Read the article

  • Interesting things – Twitter annotations and your phone as a web server

    - by jamiet
    I overheard/read a couple of things today that really made me, data junkie that I am, take a step back and think, "Hmmm, yeah, that could be really interesting", and I wanted to make a note of them here so that (a) I could bring them to the attention of anyone that happens to read this and (b) I can maybe come back here in a few years and see if either of these has come to fruition.

Your phone as a web server

Today I was listening to Jon Udell's (twitter) "Interviews with Innovators" podcast, in which he interviewed Herbert Van de Sompel (twitter) about his Memento project. During the interview Jon and Herbert made the following remarks:

Jon: [some people] really had this vision of a web of servers, the notion that every node on the internet, every connected entity, is potentially a server and a client…we can see where we're getting to a point where these endpoint devices we have in our pockets are going to be massively capable and it may be in the not too distant future that significant chunks of the web archive will be cached all over the place including on your own machine…

Herbert: wasn't it Opera who at one point turned your browser into a server?

That really got my brain ticking. We all carry a mobile phone with us and therefore we all potentially carry a mobile web server with us as well. To my mind the only things really stopping that from happening are the capabilities of the phone hardware, the capabilities of the network infrastructure, and the will to just bloody do it. Certainly all the standards required for addressing a web server on a phone already exist (to this uninitiated observer DNS and IPv6 seem to solve that problem), so why not?

I tweeted about the idea and Rory Street answered back with "why would you want a phone to be a web server?". It's a fair question and one that I would like to try and answer.

Mobile phones are increasingly becoming our window onto the world as we use them to upload messages to Twitter, record our location on FourSquare or interact with our friends on Facebook, but in each of these cases some other service is acting as our intermediary; to see what I'm thinking you have to go via Twitter, to see where I am you have to go to FourSquare (I'm using 'I' liberally, I don't actually use FourSquare before you ask). Why should this have to be the case? Why can't that data be decentralised? Why can't we be masters of our own data universe? If my phone acted as a web server then I could expose all of that information without needing those intermediary services. I see a time when we can pass around URLs such as the following:

http://jamiesphone.net/location/current - Where is Jamie right now?
http://jamiesphone.net/location/2010-04-21 - Where was Jamie on 21st April 2010?
http://jamiesphone.net/thoughts/current - What's on Jamie's mind right now?
http://jamiesphone.net/blog - What documents is Jamie sharing with me?
http://jamiesphone.net/calendar/next7days - Where is Jamie planning to be over the next 7 days?

…and those URLs get served off of the phone in our pockets. If we govern that data then we can control who has access to it and (crucially) how long it's available for. Want to wipe yourself off the face of the web? It's pretty easy if you're in control of all the data: just turn your phone off. None of this exists today but I look forward to a time when it does.
Opera really were onto something last June when they announced Opera Unite (admittedly Unite only works because Opera provide an intermediary DNS-alike system – it isn't totally decentralised).

Opening up Twitter annotations

Last week Twitter held their first developer conference, called Chirp, where they announced an upcoming new feature called 'Twitter Annotations'; in short, this will allow us to attach metadata to a tweet, thus enhancing the tweet itself. Think of it as a richer version of hashtags. To think of it another way, Twitter are turning their data into a humongous Entity-Attribute-Value or triple-tuple store. That alone has huge implications, both for the web and for Twitter as a whole: the ability to enrich those 140 characters of data and thus make them more useful is indeed compelling. However, today I stumbled upon a blog post from Eugene Mandel entitled Tweet Annotations – a Way to a Metadata Marketplace? where he proposed the idea of allowing tweets to have metadata added by people other than the person who tweeted the original tweet. This idea really fascinated me, especially when I read some of the potential uses that Eugene and his commenters suggested. They included:

Amazon could attach an ISBN to a tweet that mentions a book; specialist client apps for book lovers could be built up around this metadata.
Advertisers could pay to place adverts in metadata; the revenue generated from those adverts could be shared with the tweeter or the people who add the metadata.

Granted, allowing anyone to add metadata to a tweet has the potential to create a spam problem the like of which we haven't even envisaged, but spam hasn't halted the growth of the web and neither should it halt the growth of data annotations. The original tweeter should of course be able to determine who can add metadata and whether it should be moderated. As Eugene says himself:

Opening publishing tweet annotations to anyone will open the way to a marketplace of metadata where client developers, data mining companies and advertisers can add new meaning to Twitter and build innovative businesses.

What Eugene and his followers did not mention is what I think is potentially the most fascinating use of opening up annotations. Google's success today is built on their PageRank algorithm, which measures the validity of a web page by the number of incoming links to it and the page rank of the sites containing those links: a system built on reputation. Twitter annotations could open up a new paradigm, however. Let's call it "people rank", where reputation can be measured by the metadata that people choose to apply to links and the websites containing those links. It's not hard to see why Google and Microsoft have paid big bucks to get access to the Twitter firehose!

Neither of these features, phones as web servers or the ability to add annotations to other people's tweets, exists today, but I strongly believe that they could dramatically enhance the web as we know it. I hope to look back on this blog post in a few years in the knowledge that these ideas have been put into place.

@Jamiet

    Read the article

< Previous Page | 214 215 216 217 218 219 220 221 222 223 224 225  | Next Page >