Search Results

Search found 1117 results on 45 pages for 'craftsman don'.

Page 32 of 45

  • PASS Summit Location follow up - result analysis

    - by simonsabin
    I've had a chance to look at the results directly and it is clear that there is a tough choice. On the one hand people are saying that they prefer to have PASS put their money into chapters and things like 24 Hours of PASS rather than an event on the east coast. At the same time, almost 50% more people said they would be more likely to attend an East Coast event than a Seattle event, and 60% more said they would be more likely to attend a US Central region event. What's more, 60% said that the summit should be outside of Seattle every other year, with only 19% saying it should always stay in Seattle. So clearly there is a huge desire for a non-Seattle event. Looking at the other reasons for keeping it in Seattle, the big one is that people want Microsoft speakers. More people think it's somewhat important or very important that the conference is within walking distance of the hotels and restaurants. Essentially the Q6 questions show an even balance for a normal conference, highlighting that people are prepared to travel, not with the family, and that they want a well laid out conference. What's very annoying is that the questions, as people have commented, were biased towards certain answers. For instance, there was no option about whether people feel it's important to have industry-leading speakers, MVPs etc. at the conference - only questions about Microsoft speakers. I know it is very difficult to write a survey without biasing the answers one way or another. There was also no choice to show people's preference: would people prefer Microsoft speakers, or the summit being held on the East Coast/Central US? I also find it amazing that people prefer hundreds of developers rather than the SQLCAT and CSS teams; surely that indicates another issue - a lack of understanding of what these teams do. All in all it is clear that people showed they want an event outside of Seattle, but don't want PASS to be putting money into that at the expense of other community activities. I find it surprising that there appears to have been a huge weighting applied to certain questions, which has prioritised them over the huge desire for a PASS Summit outside of Seattle. Let's see where we will be in 2013 - or maybe they will rethink 2012, who knows.

    Read the article

  • Using the @ in SQL Azure Connections

    - by BuckWoody
    The other day I was working with a client on an application they were changing to a hybrid architecture – some data on-premise and other data in SQL Azure and Windows Azure Blob storage. I had them make a couple of corrections - the first was that all communications to SQL Azure need to be encrypted. It’s a simple addition to the connection string, depending on the library you use. Which brought up another interesting point. They had been using something that looked like this, using the .NET provider: Server=tcp:[serverName].database.windows.net;Database=myDataBase; User ID=LoginName;Password=myPassword; Trusted_Connection=False;Encrypt=True; This includes most of the formatting needed for SQL Azure. It specifies TCP as the transport mechanism, the database name is included, Trusted_Connection is off, and encryption is on. But it needed one more change: Server=tcp:[serverName].database.windows.net;Database=myDataBase; User ID=[LoginName]@[serverName];Password=myPassword; Trusted_Connection=False;Encrypt=True; Notice the difference? It’s the User ID parameter. It includes the @ symbol and the name of the server – not the whole DNS name, just the server name itself. The developers were a bit surprised, since it had been working with the first format that just used the user name. Why did both work, and why is one better than the other? It has to do with the connection library you use. For most libraries, the user name is enough. But for some libraries (subject to change so I don’t list them here) the server name parameter isn’t sent in the way the load balancer understands, so you need to include the server name right in the login, so the system can parse it correctly. Keep in mind, the string limit for that is 128 characters – so take the @ symbol and the server name into consideration for user names. The user connection info is detailed here: http://msdn.microsoft.com/en-us/library/ee336268.aspx Upshot? Include the @servername on your connection string just to be safe. And plan for that extra space…  
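    If you build the string in code rather than pasting it into a config file, the same rules can be expressed with SqlConnectionStringBuilder from System.Data.SqlClient. The sketch below is mine, not from the article; the server, database, login and password values are the same placeholders used above, and the 128-character limit on the User ID (including the @servername part) still applies.

        // A minimal sketch of the connection string format described above.
        // myServer, myDataBase, LoginName and myPassword are placeholders.
        using System;
        using System.Data.SqlClient;

        class SqlAzureConnectionDemo
        {
            static void Main()
            {
                const string serverName = "myServer"; // just the server name, not the full DNS name

                var builder = new SqlConnectionStringBuilder
                {
                    DataSource = "tcp:" + serverName + ".database.windows.net",
                    InitialCatalog = "myDataBase",
                    UserID = "LoginName@" + serverName,  // include @servername so every library parses it correctly
                    Password = "myPassword",
                    IntegratedSecurity = false,          // Trusted_Connection=False
                    Encrypt = true                       // always encrypt SQL Azure traffic
                };

                using (var connection = new SqlConnection(builder.ConnectionString))
                {
                    connection.Open();
                    Console.WriteLine("Connected to: " + connection.DataSource);
                }
            }
        }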

    Read the article

  • How to Send the Contents of the Clipboard to a Text File via the Send to Menu

    - by Jason Faulkner
    We have previously covered how to send the contents of a text file to the Windows Clipboard with a simple Send To shortcut, but what if you want to do the opposite? That is: send the contents of the clipboard to a text file with a simple shortcut. No problem. Here’s how. Copy the ClipOut Utility While Windows offers the command line tool ‘clip’ as a way to direct console output to the clipboard, it does not have a tool to direct the clipboard contents to the console. To do this, we are going to use a small utility named ClipOut (download link at the bottom). Simply download and extract this file to a location in your Windows PATH variable (if you don’t know what this means, just extract the EXE to your C:\Windows folder) and you are ready to go. Add the Send To Shortcut Open your Send To folder location by going to Run > shell:sendto Create a new shortcut with the command: CMD /C ClipOut > Note the above command will overwrite the contents of the selected file. If you would like to append to the contents of the selected file, use this command instead: CMD /C ClipOut >> Of course, you could make shortcuts for both. Give a descriptive name to the shortcut. You’re finished. Using this shortcut will now send the text contents copied to your Windows Clipboard to the selected file. It is important to note that the ClipOut tool only supports outputting text. If you had binary data copied to your clipboard, then the output would be empty. Changing the Icon By default, the icon for the shortcut will appear as a command prompt, but you can easily change this by editing the properties of the shortcut and clicking the Change Icon button. We used an icon located in “%SystemRoot%\System32\shell32.dll”, but any icon of your liking will do. As an additional tweak, you can set the properties of the shortcut to run minimized. This will prevent the command window from “blinking” when the send to command is run (instead it will blink in your taskbar, which is hardly noticeable). Links Download ClipOut Utility     
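    If you would rather not depend on a third-party EXE, the same trick can be approximated with a few lines of C#. The console program below is a hypothetical stand-in for ClipOut (not the actual utility): it prints the clipboard text to standard output, or, if the Send To menu passes it a file path as the first argument, appends the text to that file. Like ClipOut, it only handles text, so binary clipboard contents produce no output.

        // Hypothetical ClipOut-style helper: clipboard text to console or to a file.
        // Compile against System.Windows.Forms; the clipboard requires an STA thread.
        using System;
        using System.IO;
        using System.Windows.Forms;

        static class ClipToFile
        {
            [STAThread]
            static void Main(string[] args)
            {
                string text = Clipboard.ContainsText() ? Clipboard.GetText() : string.Empty;

                if (args.Length == 0)
                    Console.Write(text);                  // behaves like "ClipOut" piped to the console
                else
                    File.AppendAllText(args[0], text);    // behaves like the ">>" (append) shortcut
            }
        }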

    Read the article

  • Compiling realtime kernel from RHEL 6 MRG sources on CentOS 6

    - by Sashka B
    I'm trying to compile kernel-rt-2.6.33.9-rt31.75.el6rt.src.rpm from the RHEL6 MRG source RPMs on a CentOS 6 x86_64 system. It's the first time I'm doing this, so I did some research on how to do it properly. From what I found, I did: rpm -ihv kernel-rt-2.6.33.9-rt31.75.el6rt.src.rpm cd ~/rpmbuild/SPECS nano kernel-rt.spec rpmbuild -bb kernel-rt.spec 2> build-err.log | tee build-out.log In kernel-rt.spec I disabled compilation of the variants I don't need - i.e. I compile only rt and firmware. I also defined not to build debuginfo. After the compilation finished, I had two files in ~/rpmbuild/RPMS/x86_64/: kernel-rt-2.6.33.9-rt31.75.el6rt.x86_64.rpm kernel-rt-devel-2.6.33.9-rt31.75.el6rt.x86_64.rpm but when I tried to install the kernel, I got an error message: $ sudo rpm -ihv kernel-rt-2.6.33.9-rt31.75.el6rt.x86_64.rpm error: Failed dependencies: kernel-rt-firmware = 2.6.33.9-rt31.75.el6rt is needed by kernel-rt-2.6.33.9-rt31.75.el6rt.x86_64 There was no folder ~/rpmbuild/RPMS/noarch, where I would expect it to show up. I also tried rpmbuild --rebuild kernel-rt-2.6.33.9-rt31.75.el6rt.src.rpm, but got the same results. What am I doing wrong? I've seen this question, but it suggests what I have already tried, and I want to build the kernel myself, not use the pre-built one from SLC.

    Read the article

  • How to install Subversion on a 1&1 server with Windows?

    - by Miles M.
    I would like to start using Unfuddle for my project on a 1&1 server. I have never used Subversion or source control before, so I read a lot of documentation about it, but each time I get lost at the very beginning: I've downloaded the latest version of Subversion, but every tutorial describes a different way to proceed. First I saw, in a lot of tutorials, that you have to enter command lines. Is that only for Linux? Like here: http://chwalisz.org/2007/08/05/subversion-on-11-shared-hosting/ I also found something completely different on some websites; I think (correct me if I'm wrong) these are the Windows tutorials, deeply different from the Linux ones. So I found these: http://www.codinghorror.com/blog/2008/04/setting-up-subversion-on-windows.html http://geekswithblogs.net/emanish/archive/2006/06/14/81905.aspx http://better-scm.shlomifish.org/subversion/Svn-Win32-Inst-Guide.html And I don't understand: Do I still have to put the Subversion files on the server? Do I have to install Apache? Where - on my computer or on my server? I'm working with WampServer, so I think I already have Apache installed, right? When they say it is for Windows, do they mean it is for Windows servers or for your own OS? 'Cause my servers are on Linux. How could I install Subversion on a 1&1 Linux server from my Windows 7 computer? Thanks - that's a lot of questions, but it's all really messy in my mind and I can't find anything clear.

    Read the article

  • Error - "IR Hardware not detected" - but it's installed/working

    - by Robert
    I am trying to do: Settings > TV > Set up TV signal. During this process I get the error "IR Hardware not detected." With the remote, I can select the "try again" button (to re-detect) and it tries again, so the remote works. Plugging in the IR blaster doesn't change anything. (I wouldn't expect any difference, but I read a post which said you needed that. I will get Media Center to change channels once I can get this working - but first things first.) I was able to do the setup months ago when I had cable, and everything was fine. I just got DirecTV. (BTW - during the above process, Media Center detects the signal coming in on channel 3. Windows XP Media Center SP3. The TV tuner card is a Pinnacle TCTV HD PCI. Everything - and I mean everything - has the latest firmware and drivers, as of 4 months ago when I fixed a different problem. So I DON'T WANT TO HEAR the standard answer to check drivers/firmware. THANK YOU.) Thanks for any help.

    Read the article

  • How to store data in RAM in Verilog

    - by anum
    i am having a bit stream of 128 bits @ each posedge of clk,i.e.total 10 bit streams each of length 128 bits. i want to divide the 128 bit stream into 8, 8 bits n hve to store them in a ram / memory of width 8 bits. i did it by assigning 8, 8 bits to wires of size 8 bit.in this way there are 16 wires. and i am using dual port ram...wen i cal module of memory in stimulus.i don know how to give input....as i am hving 16 different wires naming from k1 to k16. **codeeee** // this is stimulus file module final_stim; reg [7:0] in,in_data; reg clk,rst_n,rd,wr,rd_data,wr_data; wire [7:0] out,out_wr, ouut; wire[7:0] d; integer i; //wire[7:0] xor_out; reg kld,f; reg [127:0]key; wire [127:0] key_expand; wire [7:0]out_data; reg [7:0] k; //wire [7:0] k1,k2,k3,k4,k5,k6,k7,k8,k9,k10,k11,k12,k13,k14,k15,k16; wire [7:0] out_data1; **//key_expand is da output which is giving 10 streams of size 128 bits.** assign k1=key_expand[127:120]; assign k2=key_expand[119:112]; assign k3=key_expand[111:104]; assign k4=key_expand[103:96]; assign k5=key_expand[95:88]; assign k6=key_expand[87:80]; assign k7=key_expand[79:72]; assign k8=key_expand[71:64]; assign k9=key_expand[63:56]; assign k10=key_expand[55:48]; assign k11=key_expand[47:40]; assign k12=key_expand[39:32]; assign k13=key_expand[31:24]; assign k14=key_expand[23:16]; assign k15=key_expand[15:8]; assign k16=key_expand[7:0]; **// then the module of memory is instanciated. //here k1 is sent as input.but i don know how to save the other values of k. //i tried to use for loop but it dint help** memory m1(clk,rst_n,rd, wr,k1,out_data1); aes_sbox b(out,d); initial begin clk=1'b1; rst_n=1'b0; #20 rst_n = 1; //rd=1'b1; wr_data=1'b1; in=8'hd4; #20 //rst_n=1'b1; in=8'h27; rd_data=1'b0; wr_data=1'b1; #20 in=8'h11; rd_data=1'b0; wr_data=1'b1; #20 in=8'hae; rd_data=1'b0; wr_data=1'b1; #20 in=8'he0; rd_data=1'b0; wr_data=1'b1; #20 in=8'hbf; rd_data=1'b0; wr_data=1'b1; #20 in=8'h98; rd_data=1'b0; wr_data=1'b1; #20 in=8'hf1; rd_data=1'b0; wr_data=1'b1; #20 in=8'hb8; rd_data=1'b0; wr_data=1'b1; #20 in=8'hb4; rd_data=1'b0; wr_data=1'b1; #20 in=8'h5d; rd_data=1'b0; wr_data=1'b1; #20 in=8'he5; rd_data=1'b0; wr_data=1'b1; #20 in=8'h1e; rd_data=1'b0; wr_data=1'b1; #20 in=8'h41; rd_data=1'b0; wr_data=1'b1; #20 in=8'h52; rd_data=1'b0; wr_data=1'b1; #20 in=8'h30; rd_data=1'b0; wr_data=1'b1; #20 wr_data=1'b0; #380 rd_data=1'b1; #320 rd_data = 1'b0; /////////////// #10 kld = 1'b1; key=128'h 2b7e151628aed2a6abf7158809cf4f3c; #20 kld = 1'b0; key = 128'h 2b7e151628aed2a6abf7158809cf4f3c; wr = 1'b1; rd = 1'b0; #10 wr = 1'b1; rd = 1'b1; #20 kld = 1'b0; key = 128'h 2b7e151628aed2a6abf7158809cf4f3c; #20 kld = 1'b0; key = 128'h 2b7e151628aed2a6abf7158809cf4f3c; wr = 1'b1; rd = 1'b1; #20 kld = 1'b0; key = 128'h 2b7e151628aed2a6abf7158809cf4f3c; wr = 1'b1; rd = 1'b1; #20 kld = 1'b0; key = 128'h 2b7e151628aed2a6abf7158809cf4f3c; wr = 1'b1; rd = 1'b1; #20 kld = 1'b0; key = 128'h 2b7e151628aed2a6abf7158809cf4f3c; wr = 1'b1; rd = 1'b1; #20 kld = 1'b0; key = 128'h 2b7e151628aed2a6abf7158809cf4f3c; wr = 1'b1; rd = 1'b1; #20 kld = 1'b0; key = 128'h 2b7e151628aed2a6abf7158809cf4f3c; wr = 1'b1; rd = 1'b1; #20 kld = 1'b0; key = 128'h 2b7e151628aed2a6abf7158809cf4f3c; wr = 1'b1; rd = 1'b1; #20 kld = 1'b0; key = 128'h 2b7e151628aed2a6abf7158809cf4f3c; wr = 1'b1; rd = 1'b1; #20 kld = 1'b0; key = 128'h 2b7e151628aed2a6abf7158809cf4f3c; wr = 1'b1; rd = 1'b1; #20 wr = 1'b0; #20 rd = 1'b1; #4880 f=1'b1; ///////////////////////////////////////////////// // out_data[i] end /*always@(*) begin while(i) 
mem[i]^mem1[i] ; i<=16; break; end*/ always #10 clk=~clk; always@(posedge clk) begin //$monitor($time," out_wr=%h,out_rd=%h\n ",out_wr,out); #10000 $stop; end endmodule
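The core difficulty in the question - splitting a 128-bit word into sixteen bytes and addressing them by index rather than through sixteen separately named wires (k1..k16) - is easier to see outside of Verilog. The C# fragment below is only an illustration of that slicing/indexing idea and none of it comes from the testbench above; in Verilog the equivalent would be an array of 8-bit registers (or a RAM) written in a for loop or over sixteen clock cycles.

    // Illustration only: slice a 128-bit value into sixteen 8-bit chunks and
    // store them by index, instead of declaring sixteen separately named variables.
    using System;
    using System.Globalization;
    using System.Numerics;

    class KeySliceDemo
    {
        static void Main()
        {
            // The 128-bit key used in the stimulus: 2b7e151628aed2a6abf7158809cf4f3c
            BigInteger key = BigInteger.Parse("2b7e151628aed2a6abf7158809cf4f3c",
                                              NumberStyles.HexNumber);

            var ram = new byte[16];                      // stands in for the 8-bit-wide memory
            for (int i = 0; i < 16; i++)
            {
                // byte i holds bits [127-8*i : 120-8*i], i.e. k1..k16 in the question
                ram[i] = (byte)((key >> (120 - 8 * i)) & 0xFF);
            }

            foreach (byte b in ram)
                Console.Write("{0:x2} ", b);             // 2b 7e 15 16 ... 4f 3c
            Console.WriteLine();
        }
    }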

    Read the article

  • Follow through - How to set up an equivalent USVIDEO.ORG DNS proxy on Linux

    - by DNSDC
    I'm quite keen to set up a similar service (but free), and it seems you know how to do this. "you need to run your own private dns with artificial records for example pandora.com you also need a real dns to fall back on. now that all requests for these sites are going to your US located box you can open up port 80 on squid and listen for the traffic. your cache_peer settings should allow you to map each domain to their real ip. The trafic now flows initially from your US located box to the service but then the server responds it responds directly to the host. no magic here. I won't share the fine details as it probably best serves all to not over exploit this." Did you mean we need to: 1. Set up a forward-only DNS on a US-based server/IP? 2. Set up cache_peer and cache_peer_domain in Squid - I've got this. 3. Are any iptables rules (PREROUTING, POSTROUTING) needed to accomplish this? I appreciate your expert advice. Cheers, Don

    Read the article

  • How to reduce memory consumption on an AWS EC2 t1.micro instance (free tier) Ubuntu Server 14.04 LTS EBS

    - by CMPSoares
    Hi, I'm working on my bachelor thesis and for that I need to host a node.js web application on AWS. In order to avoid costs I'm using a t1.micro instance with 30GB of disk space (from what I know, that's the maximum I get in the free tier), which is barely used. Instead, I have problems with memory consumption: it's using all of it. I tried the approach of creating a virtual swap area, as mentioned in "Why don't EC2 ubuntu images have swap?", with these commands: sudo dd if=/dev/zero of=/var/swapfile bs=1M count=2048 && sudo chmod 600 /var/swapfile && sudo mkswap /var/swapfile && echo /var/swapfile none swap defaults 0 0 | sudo tee -a /etc/fstab && sudo swapon -a But somehow this swap area isn't being used. Is something missing in this approach, or is there another way of reducing memory consumption on this type of AWS instance? Bottom line: this causes server freezes and crashes, and that's what I want to stop, either by using the swap, by reducing memory usage, or both.

    Read the article

  • Can't get a display albums function to work... php [closed]

    - by Zhenia
    I need your help with this code, please. I am trying to display an album from the database, but I just get some strange signs and have no idea why... the signs are Albums. Help me out if you know how to solve this problem. How can I make the code display the name of the album and the number of images in it? I am sure everything is fine with the DB. This is the function: function get_albums() { $albums = array(); // not always have got to put brackets $albums_query = "SELECT albums.album_id, albums.timestamp, albums.name, LEFT(albums.description, 50) as description, COUNT(images.image_id) as image_count FROM albums LEFT JOIN images ON albums.album_id = images.album_id WHERE albums.user_id = '{$_SESSION['user_id']}' GROUP BY albums.album_id"; $res = mysql_query($albums_query) or die(mysql_error().'<br>'.$albums_query); while($albums_row = mysql_fetch_assoc($res)){ $albums = array ( 'id' => $albums_row['album_id'], 'timestamp'=> $albums_row['timestamp'], 'name' => $albums_row['name'], 'description' => $albums_row['description'], 'count' =>$albums_row['image_count'] ); } return $albums; } and the other half of the code: <?php $albums = get_albums(); if(empty($albums)) { echo'<p>You don\'t have any albums</p>'; }else{ foreach($albums as $album){ echo'<p><a href="view_album.php?album_id=',$album['id'],'">',$album['name'],'</a>(',$album['count'],'images)<br /></p>'; } } ?>

    Read the article

  • AngularJS on top of ASP.NET: Moving the MVC framework out to the browser

    - by Varun Chatterji
    Heavily drawing inspiration from Ruby on Rails, MVC4’s convention over configuration model of development soon became the Holy Grail of .NET web development. The MVC model brought with it the goodness of proper separation of concerns between business logic, data, and the presentation logic. However, the MVC paradigm, was still one in which server side .NET code could be mixed with presentation code. The Razor templating engine, though cleaner than its predecessors, still encouraged and allowed you to mix .NET server side code with presentation logic. Thus, for example, if the developer required a certain <div> tag to be shown if a particular variable ShowDiv was true in the View’s model, the code could look like the following: Fig 1: To show a div or not. Server side .NET code is used in the View Mixing .NET code with HTML in views can soon get very messy. Wouldn’t it be nice if the presentation layer (HTML) could be pure HTML? Also, in the ASP.NET MVC model, some of the business logic invariably resides in the controller. It is tempting to use an anti­pattern like the one shown above to control whether a div should be shown or not. However, best practice would indicate that the Controller should not be aware of the div. The ShowDiv variable in the model should not exist. A controller should ideally, only be used to do the plumbing of getting the data populated in the model and nothing else. The view (ideally pure HTML) should render the presentation layer based on the model. In this article we will see how Angular JS, a new JavaScript framework by Google can be used effectively to build web applications where: 1. Views are pure HTML 2. Controllers (in the server sense) are pure REST based API calls 3. The presentation layer is loaded as needed from partial HTML only files. What is MVVM? MVVM short for Model View View Model is a new paradigm in web development. In this paradigm, the Model and View stuff exists on the client side through javascript instead of being processed on the server through postbacks. These frameworks are JavaScript frameworks that facilitate the clear separation of the “frontend” or the data rendering logic from the “backend” which is typically just a REST based API that loads and processes data through a resource model. The frameworks are called MVVM as a change to the Model (through javascript) gets reflected in the view immediately i.e. Model > View. Also, a change on the view (through manual input) gets reflected in the model immediately i.e. View > Model. The following figure shows this conceptually (comments are shown in red): Fig 2: Demonstration of MVVM in action In Fig 2, two text boxes are bound to the same variable model.myInt. Thus, changing the view manually (changing one text box through keyboard input) also changes the other textbox in real time demonstrating V > M property of a MVVM framework. Furthermore, clicking the button adds 1 to the value of model.myInt thus changing the model through JavaScript. This immediately updates the view (the value in the two textboxes) thus demonstrating the M > V property of a MVVM framework. Thus we see that the model in a MVVM JavaScript framework can be regarded as “the single source of truth“. This is an important concept. Angular is one such MVVM framework. We shall use it to build a simple app that sends SMS messages to a particular number. Application, Routes, Views, Controllers, Scope and Models Angular can be used in many ways to construct web applications. 
For this article, we shall only focus on building Single Page Applications (SPAs). Many of the approaches we will follow in this article have alternatives. It is beyond the scope of this article to explain every nuance in detail but we shall try to touch upon the basic concepts and end up with a working application that can be used to send SMS messages using Sent.ly Plus (a service that is itself built using Angular). Before you read on, we would like to urge you to forget what you know about Models, Views, Controllers and Routes in the ASP.NET MVC4 framework. All these words have different meanings in the Angular world. Whenever these words are used in this article, they will refer to Angular concepts and not ASP.NET MVC4 concepts. The following figure shows the skeleton of the root page of an SPA: Fig 3: The skeleton of a SPA The skeleton of the application is based on the Bootstrap starter template which can be found at: http://getbootstrap.com/examples/starter­template/ Apart from loading the Angular, jQuery and Bootstrap JavaScript libraries, it also loads our custom scripts /app/js/controllers.js /app/js/app.js These scripts define the routes, views and controllers which we shall come to in a moment. Application Notice that the body tag (Fig. 3) has an extra attribute: ng­app=”smsApp” Providing this tag “bootstraps” our single page application. It tells Angular to load a “module” called smsApp. This “module” is defined /app/js/app.js angular.module('smsApp', ['smsApp.controllers', function () {}]) Fig 4: The definition of our application module The line shows above, declares a module called smsApp. It also declares that this module “depends” on another module called “smsApp.controllers”. The smsApp.controllers module will contain all the controllers for our SPA. Routing and Views Notice that in the Navbar (in Fig 3) we have included two hyperlinks to: “#/app” “#/help” This is how Angular handles routing. Since the URLs start with “#”, they are actually just bookmarks (and not server side resources). However, our route definition (in /app/js/app.js) gives these URLs a special meaning within the Angular framework. angular.module('smsApp', ['smsApp.controllers', function () { }]) //Configure the routes .config(['$routeProvider', function ($routeProvider) { $routeProvider.when('/binding', { templateUrl: '/app/partials/bindingexample.html', controller: 'BindingController' }); }]); Fig 5: The definition of a route with an associated partial view and controller As we can see from the previous code sample, we are using the $routeProvider object in the configuration of our smsApp module. Notice how the code “asks for” the $routeProvider object by specifying it as a dependency in the [] braces and then defining a function that accepts it as a parameter. This is known as dependency injection. Please refer to the following link if you want to delve into this topic: http://docs.angularjs.org/guide/di What the above code snippet is doing is that it is telling Angular that when the URL is “#/binding”, then it should load the HTML snippet (“partial view”) found at /app/partials/bindingexample.html. Also, for this URL, Angular should load the controller called “BindingController”. We have also marked the div with the class “container” (in Fig 3) with the ng­view attribute. This attribute tells Angular that views (partial HTML pages) defined in the routes will be loaded within this div. 
You can see that the Angular JavaScript framework, unlike many other frameworks, works purely by extending HTML tags and attributes. It also allows you to extend HTML with your own tags and attributes (through directives) if you so desire, you can find out more about directives at the following URL: http://www.codeproject.com/Articles/607873/Extending­HTML­with­AngularJS­Directives Controllers and Models We have seen how we define what views and controllers should be loaded for a particular route. Let us now consider how controllers are defined. Our controllers are defined in the file /app/js/controllers.js. The following snippet shows the definition of the “BindingController” which is loaded when we hit the URL http://localhost:port/index.html#/binding (as we have defined in the route earlier as shown in Fig 5). Remember that we had defined that our application module “smsApp” depends on the “smsApp.controllers” module (see Fig 4). The code snippet below shows how the “BindingController” defined in the route shown in Fig 5 is defined in the module smsApp.controllers: angular.module('smsApp.controllers', [function () { }]) .controller('BindingController', ['$scope', function ($scope) { $scope.model = {}; $scope.model.myInt = 6; $scope.addOne = function () { $scope.model.myInt++; } }]); Fig 6: The definition of a controller in the “smsApp.controllers” module. The pieces are falling in place! Remember Fig.2? That was the code of a partial view that was loaded within the container div of the skeleton SPA shown in Fig 3. The route definition shown in Fig 5 also defined that the controller called “BindingController” (shown in Fig 6.) was loaded when we loaded the URL: http://localhost:22544/index.html#/binding The button in Fig 2 was marked with the attribute ng­click=”addOne()” which added 1 to the value of model.myInt. In Fig 6, we can see that this function is actually defined in the “BindingController”. Scope We can see from Fig 6, that in the definition of “BindingController”, we defined a dependency on $scope and then, as usual, defined a function which “asks for” $scope as per the dependency injection pattern. So what is $scope? Any guesses? As you might have guessed a scope is a particular “address space” where variables and functions may be defined. This has a similar meaning to scope in a programming language like C#. Model: The Scope is not the Model It is tempting to assign variables in the scope directly. For example, we could have defined myInt as $scope.myInt = 6 in Fig 6 instead of $scope.model.myInt = 6. The reason why this is a bad idea is that scope in hierarchical in Angular. Thus if we were to define a controller which was defined within the another controller (nested controllers), then the inner controller would inherit the scope of the parent controller. This inheritance would follow JavaScript prototypal inheritance. Let’s say the parent controller defined a variable through $scope.myInt = 6. The child controller would inherit the scope through java prototypical inheritance. This basically means that the child scope has a variable myInt that points to the parent scopes myInt variable. Now if we assigned the value of myInt in the parent, the child scope would be updated with the same value as the child scope’s myInt variable points to the parent scope’s myInt variable. 
However, if we were to assign the value of the myInt variable in the child scope, then the link of that variable to the parent scope would be broken as the variable myInt in the child scope now points to the value 6 and not to the parent scope’s myInt variable. But, if we defined a variable model in the parent scope, then the child scope will also have a variable model that points to the model variable in the parent scope. Updating the value of $scope.model.myInt in the parent scope would change the model variable in the child scope too as the variable is pointed to the model variable in the parent scope. Now changing the value of $scope.model.myInt in the child scope would ALSO change the value in the parent scope. This is because the model reference in the child scope is pointed to the scope variable in the parent. We did no new assignment to the model variable in the child scope. We only changed an attribute of the model variable. Since the model variable (in the child scope) points to the model variable in the parent scope, we have successfully changed the value of myInt in the parent scope. Thus the value of $scope.model.myInt in the parent scope becomes the “single source of truth“. This is a tricky concept, thus it is considered good practice to NOT use scope inheritance. More info on prototypal inheritance in Angular can be found in the “JavaScript Prototypal Inheritance” section at the following URL: https://github.com/angular/angular.js/wiki/Understanding­Scopes. Building It: An Angular JS application using a .NET Web API Backend Now that we have a perspective on the basic components of an MVVM application built using Angular, let’s build something useful. We will build an application that can be used to send out SMS messages to a given phone number. The following diagram describes the architecture of the application we are going to build: Fig 7: Broad application architecture We are going to add an HTML Partial to our project. This partial will contain the form fields that will accept the phone number and message that needs to be sent as an SMS. It will also display all the messages that have previously been sent. All the executable code that is run on the occurrence of events (button clicks etc.) in the view resides in the controller. The controller interacts with the ASP.NET WebAPI to get a history of SMS messages, add a message etc. through a REST based API. For the purposes of simplicity, we will use an in memory data structure for the purposes of creating this application. Thus, the tasks ahead of us are: Creating the REST WebApi with GET, PUT, POST, DELETE methods. Creating the SmsView.html partial Creating the SmsController controller with methods that are called from the SmsView.html partial Add a new route that loads the controller and the partial. 1. Creating the REST WebAPI This is a simple task that should be quite straightforward to any .NET developer. 
The following listing shows our ApiController: public class SmsMessage { public string to { get; set; } public string message { get; set; } } public class SmsResource : SmsMessage { public int smsId { get; set; } } public class SmsResourceController : ApiController { public static Dictionary<int, SmsResource> messages = new Dictionary<int, SmsResource>(); public static int currentId = 0; // GET api/<controller> public List<SmsResource> Get() { List<SmsResource> result = new List<SmsResource>(); foreach (int key in messages.Keys) { result.Add(messages[key]); } return result; } // GET api/<controller>/5 public SmsResource Get(int id) { if (messages.ContainsKey(id)) return messages[id]; return null; } // POST api/<controller> public List<SmsResource> Post([FromBody] SmsMessage value) { //Synchronize on messages so we don't have id collisions lock (messages) { SmsResource res = (SmsResource) value; res.smsId = currentId++; messages.Add(res.smsId, res); //SentlyPlusSmsSender.SendMessage(value.to, value.message); return Get(); } } // PUT api/<controller>/5 public List<SmsResource> Put(int id, [FromBody] SmsMessage value) { //Synchronize on messages so we don't have id collisions lock (messages) { if (messages.ContainsKey(id)) { //Update the message messages[id].message = value.message; messages[id].to = value.message; } return Get(); } } // DELETE api/<controller>/5 public List<SmsResource> Delete(int id) { if (messages.ContainsKey(id)) { messages.Remove(id); } return Get(); } } Once this class is defined, we should be able to access the WebAPI by a simple GET request using the browser: http://localhost:port/api/SmsResource Notice the commented line: //SentlyPlusSmsSender.SendMessage The SentlyPlusSmsSender class is defined in the attached solution. We have shown this line as commented as we want to explain the core Angular concepts. If you load the attached solution, this line is uncommented in the source and an actual SMS will be sent! By default, the API returns XML. For consumption of the API in Angular, we would like it to return JSON. To change the default to JSON, we make the following change to WebApiConfig.cs file located in the App_Start folder. public static class WebApiConfig { public static void Register(HttpConfiguration config) { config.Routes.MapHttpRoute( name: "DefaultApi", routeTemplate: "api/{controller}/{id}", defaults: new { id = RouteParameter.Optional } ); var appXmlType = config.Formatters.XmlFormatter. SupportedMediaTypes. FirstOrDefault( t => t.MediaType == "application/xml"); config.Formatters.XmlFormatter.SupportedMediaTypes.Remove(appXmlType); } } We now have our backend REST Api which we can consume from Angular! 2. Creating the SmsView.html partial This simple partial will define two fields: the destination phone number (international format starting with a +) and the message. These fields will be bound to model.phoneNumber and model.message. We will also add a button that we shall hook up to sendMessage() in the controller. A list of all previously sent messages (bound to model.allMessages) will also be displayed below the form input. 
The following code shows the code for the partial: <!--­­ If model.errorMessage is defined, then render the error div -­­> <div class="alert alert-­danger alert-­dismissable" style="margin­-top: 30px;" ng­-show="model.errorMessage != undefined"> <button type="button" class="close" data­dismiss="alert" aria­hidden="true">&times;</button> <strong>Error!</strong> <br /> {{ model.errorMessage }} </div> <!--­­ The input fields bound to the model --­­> <div class="well" style="margin-­top: 30px;"> <table style="width: 100%;"> <tr> <td style="width: 45%; text-­align: center;"> <input type="text" placeholder="Phone number (eg; +44 7778 609466)" ng­-model="model.phoneNumber" class="form-­control" style="width: 90%" onkeypress="return checkPhoneInput();" /> </td> <td style="width: 45%; text-­align: center;"> <input type="text" placeholder="Message" ng­-model="model.message" class="form-­control" style="width: 90%" /> </td> <td style="text-­align: center;"> <button class="btn btn-­danger" ng-­click="sendMessage();" ng-­disabled="model.isAjaxInProgress" style="margin­right: 5px;">Send</button> <img src="/Content/ajax-­loader.gif" ng­-show="model.isAjaxInProgress" /> </td> </tr> </table> </div> <!--­­ The past messages ­­--> <div style="margin-­top: 30px;"> <!­­-- The following div is shown if there are no past messages --­­> <div ng­-show="model.allMessages.length == 0"> No messages have been sent yet! </div> <!--­­ The following div is shown if there are some past messages --­­> <div ng-­show="model.allMessages.length == 0"> <table style="width: 100%;" class="table table-­striped"> <tr> <td>Phone Number</td> <td>Message</td> <td></td> </tr> <!--­­ The ng-­repeat directive is line the repeater control in .NET, but as you can see this partial is pure HTML which is much cleaner --> <tr ng-­repeat="message in model.allMessages"> <td>{{ message.to }}</td> <td>{{ message.message }}</td> <td> <button class="btn btn-­danger" ng-­click="delete(message.smsId);" ng­-disabled="model.isAjaxInProgress">Delete</button> </td> </tr> </table> </div> </div> The above code is commented and should be self explanatory. Conditional rendering is achieved through using the ng-­show=”condition” attribute on various div tags. Input fields are bound to the model and the send button is bound to the sendMessage() function in the controller as through the ng­click=”sendMessage()” attribute defined on the button tag. While AJAX calls are taking place, the controller sets model.isAjaxInProgress to true. Based on this variable, buttons are disabled through the ng-­disabled directive which is added as an attribute to the buttons. The ng-­repeat directive added as an attribute to the tr tag causes the table row to be rendered multiple times much like an ASP.NET repeater. 3. Creating the SmsController controller The penultimate piece of our application is the controller which responds to events from our view and interacts with our MVC4 REST WebAPI. The following listing shows the code we need to add to /app/js/controllers.js. Note that controller definitions can be chained. Also note that this controller “asks for” the $http service. The $http service is a simple way in Angular to do AJAX. So far we have only encountered modules, controllers, views and directives in Angular. The $http is new entity in Angular called a service. More information on Angular services can be found at the following URL: http://docs.angularjs.org/guide/dev_guide.services.understanding_services. 
.controller('SmsController', ['$scope', '$http', function ($scope, $http) { //We define the model $scope.model = {}; //We define the allMessages array in the model //that will contain all the messages sent so far $scope.model.allMessages = []; //The error if any $scope.model.errorMessage = undefined; //We initially load data so set the isAjaxInProgress = true; $scope.model.isAjaxInProgress = true; //Load all the messages $http({ url: '/api/smsresource', method: "GET" }). success(function (data, status, headers, config) { this callback will be called asynchronously //when the response is available $scope.model.allMessages = data; //We are done with AJAX loading $scope.model.isAjaxInProgress = false; }). error(function (data, status, headers, config) { //called asynchronously if an error occurs //or server returns response with an error status. $scope.model.errorMessage = "Error occurred status:" + status; //We are done with AJAX loading $scope.model.isAjaxInProgress = false; }); $scope.delete = function (id) { //We are making an ajax call so we set this to true $scope.model.isAjaxInProgress = true; $http({ url: '/api/smsresource/' + id, method: "DELETE" }). success(function (data, status, headers, config) { // this callback will be called asynchronously // when the response is available $scope.model.allMessages = data; //We are done with AJAX loading $scope.model.isAjaxInProgress = false; }); error(function (data, status, headers, config) { // called asynchronously if an error occurs // or server returns response with an error status. $scope.model.errorMessage = "Error occurred status:" + status; //We are done with AJAX loading $scope.model.isAjaxInProgress = false; }); } $scope.sendMessage = function () { $scope.model.errorMessage = undefined; var message = ''; if($scope.model.message != undefined) message = $scope.model.message.trim(); if ($scope.model.phoneNumber == undefined || $scope.model.phoneNumber == '' || $scope.model.phoneNumber.length < 10 || $scope.model.phoneNumber[0] != '+') { $scope.model.errorMessage = "You must enter a valid phone number in international format. Eg: +44 7778 609466"; return; } if (message.length == 0) { $scope.model.errorMessage = "You must specify a message!"; return; } //We are making an ajax call so we set this to true $scope.model.isAjaxInProgress = true; $http({ url: '/api/smsresource', method: "POST", data: { to: $scope.model.phoneNumber, message: $scope.model.message } }). success(function (data, status, headers, config) { // this callback will be called asynchronously // when the response is available $scope.model.allMessages = data; //We are done with AJAX loading $scope.model.isAjaxInProgress = false; }). error(function (data, status, headers, config) { // called asynchronously if an error occurs // or server returns response with an error status. $scope.model.errorMessage = "Error occurred status:" + status // We are done with AJAX loading $scope.model.isAjaxInProgress = false; }); } }]); We can see from the previous listing how the functions that are called from the view are defined in the controller. It should also be evident how easy it is to make AJAX calls to consume our MVC4 REST WebAPI. Now we are left with the final piece. We need to define a route that associates a particular path with the view we have defined and the controller we have defined. 4. Add a new route that loads the controller and the partial This is the easiest part of the puzzle. 
We simply define another route in the /app/js/app.js file: $routeProvider.when('/sms', { templateUrl: '/app/partials/smsview.html', controller: 'SmsController' }); Conclusion In this article we have seen how much of the server side functionality in the MVC4 framework can be moved to the browser thus delivering a snappy and fast user interface. We have seen how we can build client side HTML only views that avoid the messy syntax offered by server side Razor views. We have built a functioning app from the ground up. The significant advantage of this approach to building web apps is that the front end can be completely platform independent. Even though we used ASP.NET to create our REST API, we could just easily have used any other language such as Node.js, Ruby etc without changing a single line of our front end code. Angular is a rich framework and we have only touched on basic functionality required to create a SPA. For readers who wish to delve further into the Angular framework, we would recommend the following URL as a starting point: http://docs.angularjs.org/misc/started. To get started with the code for this project: Sign up for an account at http://plus.sent.ly (free) Add your phone number Go to the “My Identies Page” Note Down your Sender ID, Consumer Key and Consumer Secret Download the code for this article at: https://docs.google.com/file/d/0BzjEWqSE31yoZjZlV0d0R2Y3eW8/edit?usp=sharing Change the values of Sender Id, Consumer Key and Consumer Secret in the web.config file Run the project through Visual Studio!

    Read the article

  • The Inkremental Architect´s Napkin - #4 - Make increments tangible

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/06/12/the-inkremental-architectacutes-napkin---4---make-increments-tangible.aspxThe driver of software development are increments, small increments, tiny increments. With an increment being a slice of the overall requirement scope thin enough to implement and get feedback from a product owner within 2 days max. Such an increment might concern Functionality or Quality.[1] To make such high frequency delivery of increments possible, the transition from talking to coding needs to be as easy as possible. A user story or some other documentation of what´s supposed to get implemented until tomorrow evening at latest is one side of the medal. The other is where to put the logic in all of the code base. To implement an increment, only logic statements are needed. Functionality like Quality are just about expressions and control flow statements. Think of Assembler code without the CALL/RET instructions. That´s all is needed. Forget about functions, forget about classes. To make a user happy none of that is really needed. It´s just about the right expressions and conditional executions paths plus some memory allocation. Automatic function inlining of compilers which makes it clear how unimportant functions are for delivering value to users at runtime. But why then are there functions? Because they were invented for optimization purposes. We need them for better Evolvability and Production Efficiency. Nothing more, nothing less. No software has become faster, more secure, more scalable, more functional because we gathered logic under the roof of a function or two or a thousand. Functions make logic easier to understand. Functions make us faster in producing logic. Functions make it easier to keep logic consistent. Functions help to conserve memory. That said, functions are important. They are even the pivotal element of software development. We can´t code without them - whether you write a function yourself or not. Because there´s always at least one function in play: the Entry Point of a program. In Ruby the simplest program looks like this:puts "Hello, world!" In C# more is necessary:class Program { public static void Main () { System.Console.Write("Hello, world!"); } } C# makes the Entry Point function explicit, not so Ruby. But still it´s there. So you can think of logic always running in some function. Which brings me back to increments: In order to make the transition from talking to code as easy as possible, it has to be crystal clear into which function you should put the logic. Product owners might be content once there is a sticky note a user story on the Scrum or Kanban board. But developers need an idea of what that sticky note means in term of functions. Because with a function in hand, with a signature to run tests against, they have something to focus on. All´s well once there is a function behind whose signature logic can be piled up. Then testing frameworks can be used to check if the logic is correct. Then practices like TDD can help to drive the implementation. That´s why most code katas define exactly how the API of a solution should look like. It´s a function, maybe two or three, not more. A requirement like “Write a function f which takes this as parameters and produces such and such output by doing x” makes a developer comfortable. Yes, there are all kinds of details to think about, like which algorithm or technology to use, or what kind of state and side effects to consider. 
Even a single function not only must deliver on Functionality, but also on Quality and Evolvability. Nevertheless, once it´s clear which function to put logic in, you have a tangible starting point. So, yes, what I´m suggesting is to find a single function to put all the logic in that´s necessary to deliver on a the requirements of an increment. Or to put it the other way around: Slice requirements in a way that each increment´s logic can be located under the roof of a single function. Entry points Of course, the logic of a software will always be spread across many, many functions. But there´s always an Entry Point. That´s the most important function for each increment, because that´s the root to put integration or even acceptance tests on. A batch program like the above hello-world application only has a single Entry Point. All logic is reached from there, regardless how deep it´s nested in classes. But a program with a user interface like this has at least two Entry Points: One is the main function called upon startup. The other is the button click event handler for “Show my score”. But maybe there are even more, like another Entry Point being a handler for the event fired when one of the choices gets selected; because then some logic could check if the button should be enabled because all questions got answered. Or another Entry Point for the logic to be executed when the program is close; because then the choices made should be persisted. You see, an Entry Point to me is a function which gets triggered by the user of a software. With batch programs that´s the main function. With GUI programs on the desktop that´s event handlers. With web programs that´s handlers for URL routes. And my basic suggestion to help you with slicing requirements for Spinning is: Slice them in a way so that each increment is related to only one Entry Point function.[2] Entry Points are the “outer functions” of a program. That´s where the environment triggers behavior. That´s where hardware meets software. Entry points always get called because something happened to hardware state, e.g. a key was pressed, a mouse button clicked, the system timer ticked, data arrived over a wire.[3] Viewed from the outside, software is just a collection of Entry Point functions made accessible via buttons to press, menu items to click, gestures, URLs to open, keys to enter. Collections of batch processors I´d thus say, we haven´t moved forward since the early days of software development. We´re still writing batch programs. Forget about “event-driven programming” with its fancy GUI applications. Software is just a collection of batch processors. Earlier it was just one per program, today it´s hundreds we bundle up into applications. Each batch processor is represented by an Entry Point as its root that works on a number of resources from which it reads data to process and to which it writes results. These resources can be the keyboard or main memory or a hard disk or a communication line or a display. Together many batch processors - large and small - form applications the user perceives as a single whole: Software development that way becomes quite simple: just implement one batch processor after another. Well, at least in principle ;-) Features Each batch processor entered through an Entry Point delivers value to the user. It´s an increment. Sometimes its logic is trivial, sometimes it´s very complex. Regardless, each Entry Point represents an increment. An Entry Point implemented thus is a step forward in terms of Agility. 
At the same time it´s a tangible unit for developers. Therefore, identifying the more or less numerous batch processors in a software system is a rewarding task for product owners and developers alike. That´s where user stories meet code. In this example the user story translates to the Entry Point triggered by clicking the login button on a dialog like this: The batch then retrieves what has been entered via keyboard, loads data from a user store, and finally outputs some kind of response on the screen, e.g. by displaying an error message or showing the next dialog. This is all very simple, but you see, there is not just one thing happening, but several. Get input (email address, password) Load user for email address If user not found report error Check password Hash password Compare hash to hash stored in user Show next dialog Viewed from 10,000 feet it´s all done by the Entry Point function. And of course that´s technically possible. It´s just a bunch of logic and calling a couple of API functions. However, I suggest to take these steps as distinct aspects of the overall requirement described by the user story. Such aspects of requirements I call Features. Features too are increments. Each provides some (small) value of its own to the user. Each can be checked individually by a product owner. Instead of implementing all the logic behind the Login() entry point at once you can move forward increment by increment, e.g. First implement the dialog, let the user enter any credentials, and log him/her in without any checks. Features 1 and 4. Then hard code a single user and check the email address. Features 2 and 2.1. Then check password without hashing it (or use a very simple hash like the length of the password). Features 3. and 3.2 Replace hard coded user with a persistent user directoy, but a very simple one, e.g. a CSV file. Refinement of feature 2. Calculate the real hash for the password. Feature 3.1. Switch to the final user directory technology. Each feature provides an opportunity to deliver results in a short amount of time and get feedback. If you´re in doubt whether you can implement the whole entry point function until tomorrow night, then just go for a couple of features or even just one. That´s also why I think, you should strive for wrapping feature logic into a function of its own. It´s a matter of Evolvability and Production Efficiency. A function per feature makes the code more readable, since the language of requirements analysis and design is carried over into implementation. It makes it easier to apply changes to features because it´s clear where their logic is located. And finally, of course, it lets you re-use features in different context (read: increments). Feature functions make it easier for you to think of features as Spinning increments, to implement them independently, to let the product owner check them for acceptance individually. Increments consist of features, entry point functions consist of feature functions. So you can view software as a hierarchy of requirements from broad to thin which map to a hierarchy of functions - with entry points at the top.   I like this image of software as a self-similar structure on many levels of abstraction where requirements and code match each other. That to me is true agile design: the core tenet of Agility to move forward in increments is carried over into implementation. Increments on paper are retained in code. This way developers can easily relate to product owners. Elusive and fuzzy requirements are not tangible. 
Software production is moving forward through requirements one increment at a time, and one function at a time. In closing Product owners and developers are different - but they need to work together towards a shared goal: working software. So their notions of software need to be made compatible, they need to be connected. The increments of the product owner - user stories and features - need to be mapped straightforwardly to something which is relevant to developers. To me that´s functions. Yes, functions, not classes nor components nor micro services. We´re talking about behavior, actions, activities, processes. Their natural representation is a function. Something has to be done. Logic has to be executed. That´s the purpose of functions. Later, classes and other containers are needed to stay on top of a growing amount of logic. But to connect developers and product owners functions are the appropriate glue. Functions which represent increments. Can there always be such a small increment be found to deliver until tomorrow evening? I boldly say yes. Yes, it´s always possible. But maybe you´ve to start thinking differently. Maybe the product owner needs to start thinking differently. Completion is not the goal anymore. Neither is checking the delivery of an increment through the user interface of a software. Product owners need to become comfortable using test beds for certain features. If it´s hard to slice requirements thin enough for Spinning the reason is too little knowledge of something. Maybe you don´t yet understand the problem domain well enough? Maybe you don´t yet feel comfortable with some tool or technology? Then it´s time to acknowledge this fact. Be honest about your not knowing. And instead of trying to deliver as a craftsman officially become a researcher. Research an check back with the product owner every day - until your understanding has grown to a level where you are able to define the next Spinning increment. ? Sometimes even thin requirement slices will cover several Entry Points, like “Add validation of email addresses to all relevant dialogs.” Validation then will it put into a dozen functons. Still, though, it´s important to determine which Entry Points exactly get affected. That´s much easier, if strive for keeping the number of Entry Points per increment to 1. ? If you like call Entry Point functions event handlers, because that´s what they are. They all handle events of some kind, whether that´s palpable in your code or note. A public void btnSave_Click(object sender, EventArgs e) {…} might look like an event handler to you, but public static void Main() {…} is one also - for then event “program started”. ?
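To make the Login() example above concrete, here is a minimal C# sketch (mine, not from the article) of an Entry Point whose body is nothing but a composition of feature functions. All names and signatures are invented for illustration; the point is only the shape - each Feature lives in a function of its own, so it can be implemented, tested and accepted one increment at a time, and refined later (hard-coded user, then CSV file, then the real store; trivial hash, then the real one).

    // Illustrative sketch: an Entry Point composed of feature functions.
    // Every identifier here is hypothetical; only the structure mirrors the feature list above.
    using System;

    class LoginDialog
    {
        // Entry Point: triggered by the login button (the "outer function").
        public void Login(string email, string password)
        {
            User user = LoadUser(email);                         // Feature 2
            if (user == null)
            {
                ReportError("Unknown email address.");           // Feature 2.1
                return;
            }
            if (!CheckPassword(user, password))                  // Feature 3
            {
                ReportError("Wrong password.");
                return;
            }
            ShowNextDialog();                                    // Feature 4
        }

        // Feature functions - each one is a small increment a product owner can check on its own.
        User LoadUser(string email)
        {
            // First increment: a single hard-coded user; later a CSV file, later the real directory.
            return new User { Email = email, PasswordHash = Hash("secret") };
        }

        bool CheckPassword(User user, string password)
        {
            return Hash(password) == user.PasswordHash;          // Features 3.1 and 3.2
        }

        string Hash(string password)
        {
            return password.Length.ToString();                   // the "very simple hash" increment; replace later
        }

        void ReportError(string message) { Console.WriteLine(message); }
        void ShowNextDialog() { Console.WriteLine("Logged in."); }
    }

    class User
    {
        public string Email;
        public string PasswordHash;
    }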

    Read the article

  • Our Oracle Recruitment Team is Growing - Multiple Job Opportunities in Bangalore, India

    - by david.talamelli
    DON"T GET STUCK IN THE MATRIXSEE YOUR FUTUREVISIT THE ORACLE The position(s): CORPORATE RECRUITING RESEARCH ANALYST(S) ABOUT ORACLE Oracle's business is information--how to manage it, use it, share it, protect it. For three decades, Oracle, the world's largest enterprise software company, has provided the software and services that allow organizations to get the most up-to-date and accurate information from their business systems. Only Oracle powers the information-driven enterprise by offering a complete, integrated solution for every segment of the process industry. When you run Oracle applications on Oracle technology, you speed implementation, optimize performance, and maximize ROI. Great hiring doesn't happen by accident; it's the culmination of a series of thoughtfully planned and well executed events. At the core of any hiring process is a sourcing strategy. This is where you come in... Do you want to be a part of a world-class recruiting organization that's on the cutting edge of technology? Would you like to experience a rewarding work environment that allows you to further develop your skills, while giving you the opportunity to develop new skills? If you answered yes, you've taken your first step towards a future with Oracle. We are building a Research Team to support our North America Recruitment Team, and we need creative, smart, and ambitious individuals to help us drive our research department forward. Oracle has a track record for employing and developing the very best in the industry. We invest generously in employee development, training and resources. Be a part of the most progressive internal recruiting team in the industry. For more information about Oracle, please visit our Web site at http://www.oracle.com Escape the hum drum job world matrix, visit the Oracle and be a part of a winning team, apply today. POSITION: Corporate Recruiting Research Analyst LOCATION: Bangalore, India RESPONSIBILITIES: •Develop candidate pipeline using Web 2.0 sourcing strategies and advanced Boolean Search techniques to support U.S. Recruiting Team for various job functions and levels. •Engage with assigned recruiters to understand the supported business as well as the recruiting requirements; partner with recruiters to meet expectations and deliver a qualified pipeline of candidates. •Source candidates to include both active and passive job seekers to provide a strong pipeline of qualified candidates for each recruiter; exercise creativity to find candidates using Oracle's advanced sourcing tools/techniques. •Fully evaluate candidate's background against the requirements provided by recruiter, and process leads using ATS (Applicant Tracking System). •Manage your efforts efficiently; maintain the highest levels of client satisfaction as well as strong operations and reporting of research activities. PREFERRED QUALIFICATIONS: •Fluent in English, with excellent written and oral communication skills. •Undergraduate degree required, MBA or Masters preferred. •Proficiency with Boolean Search techniques desired. •Ability to learn new software applications quickly. •Must be able to accommodate some U.S. evening hours. •Strong organization and attention to detail skills. •Prior HR or corporate in-house recruiting experiences a plus. •The fire in the belly to learn new ideas and succeed. •Ability to work in team and individual environments. This is an excellent opportunity to join Oracle in our Bangalore Offices. 
Interested applicants can send their resume to [email protected] or contact David on +61 3 8616 3364

    Read the article

  • Month in Geek: December 2010 Edition

    - by Asian Angel
    As 2010 draws to a close, we have gathered together another great batch of article goodness for your reading enjoyment. Here are our ten hottest articles for December. Note: Articles are listed as #10 through #1. The 50 Best How-To Geek Windows Articles of 2010 Even though we cover plenty of other topics, Windows has always been a primary focus around here, and we’ve got one of the largest collections of Windows-related how-to articles anywhere. Here’s the fifty best Windows articles that we wrote in 2010. Read the article Desktop Fun: Happy New Year Wallpaper Collection [Bonus Edition] As this year draws to a close, it is a time to reflect back on what we have done this year and to look forward to the new one. To help commemorate the event we have put together a bonus size edition of Happy New Year wallpapers for your desktops. Read the article LCD? LED? Plasma? The How-To Geek Guide to HDTV Technology With image technology progressing faster than ever, High-Def has become the standard, giving TV buyers more options at cheaper prices. But what’s different in all these confusing TVs, and what should you know before buying one? Read the article HTG Explains: Which Linux File System Should You Choose? File systems are one of the layers beneath your operating system that you don’t think about—unless you’re faced with the plethora of options in Linux. Here’s how to make an educated decision on which file system to use. Read the article Desktop Fun: Merry Christmas Fonts Christmas will soon be here and there are lots of cards, invitations, gift tags, photos, and more to prepare beforehand. To help you get ready we have gathered together a great collection of fun holiday fonts to help turn those ordinary looking holiday items into extraordinary looking ones. Read the article Microsoft Security Essentials 2.0 Kills Viruses Dead. Download It Now. Microsoft’s Security Essentials has been our favorite anti-malware application for a while—it’s free, unobtrusive, and it doesn’t slow your PC down, but now it’s even better with the new 2.0 release, which adds network filtering, heuristic protection, and more. Read the article 20 OS X Keyboard Shortcuts You Might Not Know Mastering the keyboard will not only increase your navigation speed but it can also help with wrist fatigue. Here are some lesser known OS X shortcuts to help you become a keyboard ninja. Read the article 20 Windows Keyboard Shortcuts You Might Not Know Mastering the keyboard will not only increase your navigation speed but it can also help with wrist fatigue. Here are some lesser known Windows shortcuts to help you become a keyboard ninja. Read the article The 50 Best Registry Hacks that Make Windows Better We’re big fans of hacking the Windows Registry around here, and we’ve got one of the biggest collections of registry hacks you’ll find. Don’t believe us? Here’s a list of the top 50 registry hacks that we’ve covered. Read the article The Complete List of iPad Tips, Tricks, and Tutorials The Apple iPad is an amazing tablet, and to help you get the most out of it, we’ve put together a comprehensive list of every tip, trick, and tutorial for you. Read on for more. 
Read the article

    Read the article

  • The Beginner’s Guide to Greasemonkey User Scripts in Firefox

    - by Asian Angel
    Everybody knows that Firefox has add-ons for virtually everything, but if you don’t want to bloat your installation you’ve always got the option of Greasemonkey scripts instead. Here’s a quick primer on how to use them. Getting Started with User Scripts Once you have Greasemonkey installed, managing the extension is really easy. Left click on the status bar icon to turn the extension on/off and right click to access the context menu shown here. Whether you use the Options button in the Add-ons Manager Window or the context menu shown above, both will bring up the Manage User Scripts dialog. At the moment you have a nice clean slate to work with… time to get some scripts added in. The majority of user scripts can be found at two different sites, the first being appropriately named userscripts.org, and you can either browse by tag or search for a script. As you can see here your search for a particular type of script can be quickly narrowed down based on category. There is definitely a lot to choose from. For our example we focused on the “textarea” tag. There were 62 scripts available but we quickly found what we were looking for on the first page. Installing, Managing, & Using Your Scripts When you find a script that you want to install visit the script’s homepage and click on the “Install” button. Note: Link for this script provided below. Once you have clicked on the Install button, Greasemonkey will open up the following installation window. You will be able to view: A summary of what the script does A list of websites that the script is supposed to function on (our example is set for all) View the script source if desired Make a final decision on whether to install the script or cancel the process Right-clicking on our status bar icon shows our new script listed and active. Reopening the Manage User Scripts window shows: Our new script listed in the column on the left The websites/pages included An option to disable the script (can also be done in the context menu) The ability to edit the script The ability to uninstall the script If you choose to edit the script you will be asked to browse for and select a default text editor of your choice (first time only). Once you have selected a text editor you can make any changes desired to the script. We decided to test our new user script on the site. Going to the comment box at the bottom we could easily resize the window as desired. The Comment box definitely got a lot bigger. Conclusion If you prefer to keep the number of extensions to a minimum in your Firefox installation then Greasemonkey and the Userscripts website can easily provide that extra functionality without the bloat. For added auto website script detection goodness see our article on Greasefire. Note: See our article here for specialized How-To Geek User Style Scripts that can be added to Greasemonkey. 
Links: Download the Greasemonkey Extension (Mozilla Add-ons), Install the Textarea & Input Resize User Script, Visit the Userscripts.org Website, Visit the Userstyles.org Website

    Read the article

  • Something for the weekend - What's the most complex query?

    - by simonsabin
    Whenever I teach about SQL Server performance tuning I try to get across the message that there is no such thing as a table. Does that sound odd? Well it isn't, trust me. Rather than tables you need to consider structures. You have:
1. Heaps
2. Indexes (b-trees)
Some people split indexes in two, clustered and non-clustered. I feel this confuses the situation, as people associate clustered indexes with sorting but don't associate non-clustered indexes with sorting, and that is wrong. Clustered and non-clustered indexes are the same b-tree structure (and even more so with SQL 2005), with the leaf pages sorted in a linked list according to the keys of the index. The difference is that non-clustered indexes include in their structure either the clustered key(s) or the row identifier for the row in the table (see http://sqlblog.com/blogs/kalen_delaney/archive/2008/03/16/nonclustered-index-keys.aspx for more details). Beyond that they are the same: they have key columns, which are stored on the root and intermediate pages, and included columns, which are on the leaf level.
The reason this is important is that this is how the optimiser sees the world, which means it can use any of these structures to resolve your query. Even if your query only accesses one table, the optimiser can access multiple structures to get your results. One commonly sees this with a non-clustered index scan and then a key lookup (clustered index seek), but importantly it's not restricted to just using one non-clustered index and the clustered index or heap, and that's the challenge for the weekend.
So the challenge for the weekend is to produce the most complex single-table query. For those clever bods amongst you who are thinking "great, I will just use lots of XQuery functions", sorry, these are the rules:
1. You have to use a table from AdventureWorks (2005 or 2008).
2. You can add whatever indexes you like, but you must document them.
3. You cannot use XQuery, Spatial, HierarchyId, Full Text or any open rowset function.
4. You can only reference your table once, i.e. a FROM clause with ONE table and no JOINs.
5. No sub-queries.
The aim of this is to show how the optimiser can use multiple structures to build the results of a query, and also to highlight why the optimiser is doing that. How many structures can you get the optimiser to use? As an example, create these two indexes on AdventureWorks2008:
create index IX_Person_Person on Person.Person (LastName, FirstName, NameStyle, PersonType)
create index IX_Person_Person_2 on Person.Person (BusinessEntityId, ModifiedDate)
select LastName, ModifiedDate
from Person.Person
where LastName = 'Smith'
You will see that the optimiser has decided not to access the underlying clustered index of the table but to use the two indexes above to resolve the query. This highlights how the optimiser considers all storage structures (clustered indexes, non-clustered indexes and heaps) when trying to resolve a query. So are you up to the challenge for the weekend to produce the most complex single-table query? The prize is a pdf version of a popular SQL Server book, or a physical book if you live in the UK.
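To give a feel for the kind of plan the challenge is fishing for, here is a sketch of a query that gives the optimiser several structures to play with. It assumes the two indexes above have been created; whether the optimiser really combines them (rather than falling back to a key lookup or a clustered index scan) depends on your statistics and edition, so treat it as an illustration rather than a model answer. The PersonType value 'EM' is simply one of the values present in the AdventureWorks sample data.
-- single table, no joins, no sub-queries, yet the plan can touch several structures
select LastName, PersonType, ModifiedDate
from Person.Person
where LastName like 'S%'
  and PersonType = 'EM'
  and ModifiedDate >= '20080101'
The LastName and PersonType predicates can be answered from IX_Person_Person, the ModifiedDate column can be fetched from IX_Person_Person_2 via the shared BusinessEntityId key, and the base clustered index need never be read.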

    Read the article

  • SharePoint.DesignFactory.ContentFiles–building WCM sites

    - by svdoever
    One of the use cases where we use the SharePoint.DesignFactory.ContentFiles tooling is in building SharePoint Publishing (WCM) solutions for SharePoint 2007, SharePoint 2010 and Office365. Publishing solutions are often solutions that have one instance, the publishing site (possibly with subsites), that in most cases need to go through DTAP. If you dissect a publishing site, in most case you have the following findings: The publishing site spans a site collection The branding of the site is specified in the root site, because: Master pages live in the root site (/_catalogs/masterpage) Page layouts live in the root site (/_catalogs/masterpage) The style library lives in the root site ( /Style Library) and contains images, css, javascript, xslt transformations for your CQWP’s, … Preconfigured web parts live in the root site (/_catalogs/wp) The root site and subsites contains a document library called Pages (or your language-specific version of it) containing publishing pages using the page layouts and master pages The site collection contains content types, fields and lists When using the SharePoint.DesignFactory.ContentFiles tooling it is very easy to create, test, package and deploy the artifacts that can be uploaded to the SharePoint content database. This can be done in a fast and simple way without the need to create and deploy WSP packages. If we look at the above list of artifacts we can use SharePoint.DesignFactory.ContentFiles for master pages, page layouts, the style library, web part configurations, and initial publishing pages (these are normally made through the SharePoint web UI). Some artifacts like content types, fields and lists in the above list can NOT be handled by SharePoint.DesignFactory.ContentFiles, because they can’t be uploaded to the SharePoint content database. The good thing is that these artifacts are the artifacts that don’t change that much in the development of a SharePoint Publishing solution. There are however multiple ways to create these artifacts: Use paper script: create them manually in each of the environments based on documentation Automate the creation of the artifacts using (PowerShell) script Develop a WSP package to create these artifacts I’m not a big fan of the third option (see my blog post Thoughts on building deployable and updatable SharePoint solutions). It is a lot of work to create content types, fields and list definitions using all kind of XML files, and it is not allowed to modify these artifacts when in use. I know… SharePoint 2010 has some content type upgrade possibilities, but I think it is just too cumbersome. The first option has the problem that content types and fields get ID’s, and that these ID’s must be used by the metadata on for example page layouts. No problem for SharePoint.DesignFactory.ContentFiles, because it supports deploy-time resolving of these ID’s using PowerShell. For example consider the following metadata definition for the page layout contactpage-wcm.aspx.properties.ps1: Metadata page layout # This script must return a hashtable @{ name=value; ... } of field name-value pairs # for the content file that this script applies to. # On deployment to SharePoint, these values are written as fields in the corresponding list item (if any) # Note that fields must exist; they can be updated but not created or deleted. # This script is called right after the file is deployed to SharePoint.   # You can use the script parameters and arbitrary PowerShell code to interact with SharePoint. # e.g. 
to calculate properties and values at deployment time.
param([string]$SourcePath, [string]$RelativeUrl, $Context)
@{
    "ContentTypeId" = $Context.GetContentTypeID('GeneralPage');
    "MasterPageDescription" = "Cloud Aviator Contact pagelayout (wcm - don't use)";
    "PublishingHidden" = "1";
    "PublishingAssociatedContentType" = $Context.GetAssociatedContentTypeInfo('GeneralPage')
}
The PowerShell functions GetContentTypeID and GetAssociatedContentTypeInfo can, at deploy time, resolve the required information from the server we are deploying to. I personally prefer the second option, automating creation through PowerShell, because there are PowerShell scripts available to export content types and fields. An example project structure for a typical SharePoint WCM site looks like: Note that this project uses DualLayout. So if you build Publishing sites using SharePoint, check out the completely free SharePoint.DesignFactory.ContentFiles tooling and start flying!

    Read the article

  • Oracle SQL Developer Data Modeler: What Tables Aren’t In At Least One SubView?

    - by thatjeffsmith
    Organizing your data model makes the information easier to consume. One of the organizational tools provided by Oracle SQL Developer Data Modeler is the 'SubView.' In a nutshell, a SubView is a subset of your model.
The Challenge: I've just created a model which represents my entire ____________ application. We'll call it 'residential lending.' Instead of having all 100+ tables in a single model diagram, I want to break out the tables by module, e.g. appraisals, credit reports, work histories, customers, etc. I've spent several hours breaking out the tables to one or more SubViews, but I think I may have missed a few. Is there an easy way to see what tables aren't in at least ONE SubView?
The Answer: Yes, mostly. The 'mostly' comes about from the way I'm going to accomplish this task. It involves querying the SQL Developer Data Modeler Reporting Schema, so if you don't have the Reporting Schema set up, you'll need to do so. Got it? Good, let's proceed. Before you start querying your Reporting Schema, you might need a data model for the actual reporting schema…meta-meta data! You could reverse engineer the data modeler reporting schema to a new data model, or you could just reference the PDFs in the \datamodeler\reports\Reporting Schema diagrams directory. Here's a hint, it's THIS one.
The Query: Well, it's actually going to be at least 2 queries. We need to get a list of distinct designs stored in your repository. For giggles, I'm going to get a listing including each version of the model, so I can query based on design and version, or in this case, the timestamp of when it was added to the repository. We'll get that from the DMRS_DESIGNS table:
SELECT DISTINCT design_name, design_ovid, date_published
FROM DMRS_designs
Then I'm going to feed the design_ovid down to a subquery for my child report:
select name, count(distinct diagram_id)
from DMRS_DIAGRAM_ELEMENTS
where design_ovid = :DESIGN_OVID
and type = 'Table'
group by name
having count(distinct diagram_id) < 2
order by count(distinct diagram_id) desc
Each diagram element has an entry in this table, so I need to filter on type = 'Table'. Each design has AT LEAST one diagram, the master diagram, so any relational table with only one listing here is not in any SubView. If you have overloaded object names, which is VERY possible, you'll want to do the report off of OBJECT_ID, but then you'll need to correlate that to the NAME, as I doubt you're so intimate with your designs that you recognize the GUIDs. So I'm going to cheat and just stick with names, but I think you get the gist.
My Model: Of my almost 90 tables, how many have I not added to at least one SubView? Now let's run my report! Voila! My 'BEER2' table isn't in any SubView! It says '1' because the main model diagram counts as a view, so if the count came back as '2', that would mean the table was in the main model diagram and in one SubView diagram. And I know what you're thinking, what kind of residential lending program would have a table called 'BEER2'? Let's just say that my business model has some kinks to work out!
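If you would rather get the answer in a single result set instead of a parent/child report, the two queries can be combined with a join on design_ovid. This sketch uses only the columns referenced above and simply lumps all published versions of a design together; add a date_published filter if you only care about the latest version:
select d.design_name,
       e.name as table_name,
       count(distinct e.diagram_id) as diagram_count
  from DMRS_designs d
  join DMRS_diagram_elements e
    on e.design_ovid = d.design_ovid
 where e.type = 'Table'
 group by d.design_name, e.name
having count(distinct e.diagram_id) < 2
 order by d.design_name, e.name
Any table that comes back with a diagram_count of 1 appears only on the main model diagram and is not on any SubView.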

    Read the article

  • Change a File Type’s Icon in Windows 7

    - by Trevor Bekolay
    In Windows XP, you could change the icon associated with a file type in Windows Explorer. In Windows 7, you have to do some registry hacking to change a file type's icon. We'll show you a much easier and faster method for Windows 7.
File Types Manager
File Types Manager is a great little utility from NirSoft that includes the functionality of Windows XP's folder options and adds a whole lot more. It works great in Windows 7, and its interface makes it easy to change a bunch of related file types at once. A common problem we run into is icons that look too similar. You have to look for a few seconds to see the difference between the movies and the text files. Let's change the icon for the movie files to make visually scanning through directories much easier. Open up File Types Manager. Find the "Default Icon" column and click on it to sort the list by the Default Icon. (We've hidden a bunch of columns we don't need, so you may find it to be farther to the right.) This groups together all file extensions that already have the same icon. This is convenient because we want to change the icon of all video files, which at the moment all have the same default icon. Click the "Find" button on the toolbar, or press Ctrl+F. Type in a file type that you want to change. Note that all of the extensions with the same default icon are grouped together. Right click on the first extension whose icon you want to change and click on Edit Selected File Type, or select the first extension and press F2. Click the "…" button next to the Default Icon text field. Click on the Browse… button. File Types Manager allows you to select .exe, .dll, or .ico files. In our case, we have a .ico file that we took from the wonderful public domain Tango icon library. Select the appropriate icon (if you're using a .exe or .dll there could be many possible icons) then click OK. Repeat this process for each extension whose icon you would like to change. Now it's much easier to see at a glance which files are movies and which are text files! Of course, this process will work for any file type, so customize your files' icons as you see fit.
Download File Types Manager from NirSoft for Windows

    Read the article

  • Customize Team Build 2010 – Part 12: How to debug my custom activities

    In the series the following parts have been published: Part 1: Introduction Part 2: Add arguments and variables Part 3: Use more complex arguments Part 4: Create your own activity Part 5: Increase AssemblyVersion Part 6: Use custom type for an argument Part 7: How is the custom assembly found Part 8: Send information to the build log Part 9: Impersonate activities (run under other credentials) Part 10: Include Version Number in the Build Number Part 11: Speed up opening my build process template Part 12: How to debug my custom activities Part 13: Get control over the Build Output Part 14: Execute a PowerShell script Part 15: Fail a build based on the exit code of a console application
Developers are "spoilt" people who expect an easy debugging experience for every technique they work with, so they also expect it when developing custom activities for the build process template. This post describes how you can debug your custom activities without having to develop on the build server itself.
Remote debugging prerequisites
The prerequisite for these steps is to install the Microsoft Visual Studio Remote Debugging Monitor. You can find information on how to install it at http://msdn.microsoft.com/en-us/library/bt727f1t.aspx. I chose the option to run the remote debugger on the build server from a file share.
Debugging symbols prerequisites
To be able to start debugging, you need to have the pdb files on the build server together with the assembly. The pdb must have been built with Full Debug Info.
Steps
In my setup I have a development machine and a build server. To set up the remote debugging, I performed the following steps:
1. Locate on your development machine the folder C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\Remote Debugger
2. Create a share for the Remote Debugger folder. Make sure that the share (and the folder) has the correct permissions so the user on the build server has access to the share.
3. On the build server, go to the shared "Remote Debugger" folder.
4. Start msvsmon.exe, which is located in the folder that represents the platform of the build server. This will open a WinForms application.
5. Go back to your development machine and open the BuildProcess solution.
6. Start the Attach to Process command (Ctrl+Alt+P).
7. Type in the Qualifier the name of the build server. In my case the user account that started msvsmon is a different user than the user on my development machine; in that case you have to type the qualifier in the format that is shown in the Remote Debugging Monitor (in my case LOCAL\Administrator@TFSLAB) and confirm it by pressing <Enter>.
8. Since the build service is running with other credentials, check the option "Show processes from all users". Now the Attach to Process dialog shows the TFSBuildServiceHost process.
9. Set the breakpoint in the activity you want to debug and kick off a build.
Be aware that when you attach to the TFSBuildServiceHost you debug every single build that is run by this Windows service, so make sure you don't debug the build server that is in production! You can download the full solution at BuildProcess.zip. It will include the sources of every part and will continue to evolve.

    Read the article

  • Backup and Transfer Foobar2000 to a New Computer

    - by Mysticgeek
    If you are a fan of Foobar2000 you undoubtedly have tweaked it to the point where you don't want to set it all up again on a new machine. Here we look at how to transfer Foobar2000 settings to a new Windows 7 machine. Note: For this article we are transferring Foobar2000 settings from one Windows 7 machine to another over a network running Windows Home Server.
Foobar2000
Foobar2000 is an awesome music player which is highly customizable and which we've previously covered. Here we take a look at how it's set up on the current machine. It's nothing flashy, but it is set up for our needs and includes a lot of components and playlists.
Backup Files
Rather than wasting time setting everything up again on a new machine, we can back up the important files and replace them on the new machine. First type or copy the following into the Explorer address bar:
%appdata%\foobar2000
Now copy all of the files in the folder and store them on a network drive or some type of removable media or device.
New Machine
Now you can install the latest version of Foobar2000 on your new machine. You can go with a Standard install as we will be replacing our backed up configuration files anyway. When it launches, it will be set with all the defaults…and we want what we had back. Browse to the following on the new machine…
%appdata%\foobar2000
Delete all of the files in this directory… Then replace them with the ones we backed up from the other machine. You'll also want to navigate to C:\Program Files\Foobar2000 and replace the existing Components folder with the backed up one. When you get the screen telling you there are already files of the same name, select Move and Replace, and check the box Do this for the next 6 conflicts. Now we're back in business! Everything is exactly as it was on the old machine. In this example, we were moving the Foobar2000 files from a computer on the same home network. All the music is coming from a directory on our Windows Home Server, so those paths hadn't changed. If you're moving these files to a computer on another network… say your work computer, you'll need to adjust where the music folders point to.
Windows XP
If you're setting up Foobar2000 on an XP machine, you can enter the following into the Run line:
%appdata%\foobar2000
Then copy your backed up files into the Foobar2000 folder, and remember to swap out the Components folder in C:\Program Files\Foobar2000. Confirm to replace the files and folders by clicking Yes to All…
Conclusion
This method worked perfectly for us on our home network setup. There might be some other things that will need a bit of tweaking, but overall the process is quick and easy. There are a lot of cool things you can do with Foobar2000, like ripping an audio CD to FLAC. If you're a fan of Foobar2000 or considering switching to it, we will be covering more awesome features in future articles.
Download Foobar2000 – Windows Only

    Read the article

  • SWFObject and IE6 causing hair-pulling agony

    - by Piet
    I recently used SWFObject to display a flash header on a website. I chose SWFObject because:
1. Instead of displaying an annoying 'Install flash now' message, it claims to be able to show alternate content. In this case: the original header image.
2. It claims to be compatible with more or less every browser out there.
Implementation went fine, until someone tested it on IE6 and got the following error: Internet Explorer cannot open the Internet site http://www….. Operation aborted. This basically means that the site just can't be visited with IE6 (still used a lot in business environments); it even seems as if there's something wrong with your internet connection. About 10% of visitors to this site are still using IE6 (why does everyone still use Internet Explorer? Do YOU know that these days most people do NOT use Internet Explorer anymore?), so this had to be fixed. After some googling, I found the suggestion to defer loading of swfobject.js as follows:
<script type="text/javascript" defer="defer" src="http://ajax.googleapis.com/ajax/libs/swfobject/2.2/swfobject.js"></script>
<script type="text/javascript" defer="defer">
swfobject.registerObject("myId", "9", "");
</script>
What this does according to W3C: When set, this boolean attribute provides a hint to the user agent that the script is not going to generate any document content (e.g., no "document.write" in javascript) and thus, the user agent can continue parsing and rendering. I don't know exactly why, but: HURRAY! It works now!!! Only… IE6 and IE7 (didn't try IE8) now gave the following error: Line: 19 Char: 1 Error: 'swfobject' is undefined Code: 0 URL: http://www… But the flash was still running fine. Still, such an error isn't clean, especially since almost half of the site's visitors are using one of these Internet Explorer versions. Wanting a quick fix, I decided to do the following:
<script type="text/javascript" defer="defer">
if (typeof(swfobject) != "undefined") swfobject.registerObject("myId", "9", "");
</script>
I admit this is a bit of a weird 'fix'. You'd expect the flash to stop working on IE6/IE7, which it doesn't. Not planning on diving into its inner bowels, I regard this as 'mission accomplished' until someone somewhere posts a better solution (for which I set up some Google alerts). Do you have a better solution? What would be the impact on the webdev economy (or your life) if all browsers were compatible?
Addendum
Because the above turned out not to work with the new Firefox 3.5.3 (strangely, it was OK with 3.5.2 when I tested it) I decided to cut the crap and use the 'Dynamic Publishing' way. Ok, so it won't work for people who have javascript disabled, but who on earth would have flash installed AND javascript disabled? To avoid the IE6 error with the 'Dynamic Publishing' way, I call swfobject.embedSWF right after the div that will be replaced with the flash content. Calling swfobject.embedSWF in the <head> would otherwise give me the above error in IE6 again.

    Read the article

  • Drupal Modules for SEO & Content

    - by Aditi
    When we talk about Drupal SEO, there are two things to consider: the relevant SEO practices and the appropriate Drupal modules available. Optimizing your website for search engines is one of the most important aspects of launching & promoting your website, especially if ranking matters to you.
Understanding SEO
For starters, you begin with keyword research and then optimize your content according to your findings with tagging, meta tags, etc. Drupal modules, once installed, help you manage a lot of such parameters:
Identifying the target keywords
Using the Page Title and Token modules
PathAuto configuration
<H1> heading tags
Optimizing Drupal's default robots.txt file
Etc.
While Drupal gives you a lot of ability to make your website content worthy & search engine friendly, it is important to make sure you are not crossing the line, or you could get penalized.
Modules Overview
Drupal's power is at its best when you have these modules & a great brain working together. The basic SEO improvements can be achieved easily with the modules listed below, but you can win magical rankings if you use them logically & wisely. Understanding your keyword competition & enhancing your content is the basic key to success and, of course, the modules:
Pathauto: Automatically creates search engine friendly, readable URLs from tokens. A token is a piece of data from content, say the author's username or the content's title. For example, mysite.com/an-article rather than mysite.com/node/114 for every node you make.
NodeWords: An amazingly useful Drupal module that allows you to create custom meta tags and descriptions for your nodes, which gives you the ability to target specific keywords and phrases.
Page Title: Enables you to set an alternative title for the <title></title> tags and for the <h1></h1> tags on a node.
Global Redirect: Manage content duplication, 301 redirects, and URL validation with this small but powerful module.
Taxonomy Manager: Makes large additions or changes to taxonomy very easy. This module provides a powerful interface for managing taxonomies. A vocabulary gets displayed in a dynamic tree view, where parent terms can be expanded to list their nested child terms or can be collapsed.
robotstxt: A robots.txt file is vital for ensuring that search engine spiders don't index the unwanted areas of your site. This Drupal module gives you the ability to manage your robots.txt file through the CMS admin.
xmlsitemap: An XML sitemap lets the search engines index your website content. This module helps in generating and maintaining a complete sitemap for your website and gives you control over exactly which parts of the site you want to be included in the index. It can even automatically submit your sitemap to Google, Yahoo!, Ask.com and Windows Live every time you update a node or at a specific interval.
Node Import: This module allows you to import a set of nodes from a Comma Separated Values (CSV) or Tab Separated Values (TSV) text file. It makes it easy to import hundreds or thousands of CSV rows, lets you tie those rows to CCK fields (or locations), and can file them under the right taxonomy hierarchy. This is a super life-saver module.

    Read the article

  • From the Tips Box: Pre-installation Prep Work Makes Service Pack Upgrades Smoother

    - by Jason Fitzpatrick
    Last month Microsoft rolled out Windows 7 Service Pack 1 and, as with many SP releases, quite a few people are hanging back to see what happens. If you want to update but still err on the side of caution, reader Ron Troy offers a step-by-step guide. Ron's cautious approach does an excellent job of minimizing the number of issues that could crop up in a Service Pack upgrade, by thoroughly updating your driver sets and clearing out old junk before you roll out the update. Read on to see how he does it:
Just wanted to pass on a suggestion for people worried about installing Service Packs. I came up with a 'method' a couple years back that seems to work well.
1. Run Windows / Microsoft Update to get all updates EXCEPT the Service Pack.
2. Use Secunia PSI to find any other updates you need.
3. Use CCleaner or the Windows disk cleanup tools to get rid of all the old garbage out there. Make sure that you include old system updates.
4. Obviously, back up anything you really care about. An image backup can be real nice to have if things go wrong.
5. Download the correct SP version from Microsoft.com; do not use Windows / Microsoft Update to get it. Make sure you have the 64 bit version if that's what you have installed on your PC.
6. Make sure that EVERYTHING that affects the OS is up to date. That includes all sorts of drivers, starting with video and audio. And if you have an Intel chipset, use the Intel Driver Utility to update those drivers. It's very quick and easy. For the video and audio drivers, some can be updated by Intel, some by utilities on the vendor web sites, and some you just have to figure out yourself. But don't be lazy here; old drivers and Windows Service Packs are a poor mix.
7. If you have 3rd party software, check to see if they have any updates for you. They might not say that they are for the Service Pack, but you cut your risk of things not working if you do this.
8. Shut off the antivirus software (especially if 3rd party).
9. Reboot, hitting F8 to get the SafeMode menu. Choose SafeMode with Networking.
10. Log into the Administrator account to ensure that you have the right to install the SP.
11. Run the SP. It won't be very fancy this way. Maybe 45 minutes later it will reboot and then finish configuring itself, finally letting you log in.
Total installation time on most of my PCs was about 1 hour, but that followed hours of preparation on each.
On a separate note, I recently got on the Nvidia web site and their utility told me I had a new driver available for my GeForce 8600M GS. This laptop had come with Vista, now has Win 7 SP1. I had a big surprise from this driver update; the Windows Experience Score on the graphics side went way up. Kudos to Nvidia for doing a driver update that actually helps day-to-day usage. And unlike ATI's updates (which I need for my AGP based system), this update was fairly quick and very easy. Also, Nvidia drivers have never, as I can recall, given me BSODs, many of which I've gotten from ATI (TDR errors).

    Read the article

  • How SQL Server 2014 impacts Red Gate’s SQL Compare

    - by Michelle Taylor
    SQL Compare 10.7 successfully connects to SQL Server 2014, but it doesn’t yet cover the SQL Server 2014 features which would require us to make major changes to SQL Compare to support. In this post I’m going to talk about the SQL Server 2014 features we’ve already begun supporting, and which ones we’re working on for the next release of SQL Compare (v11). From SQL Compare’s perspective, the new memory-optimized table functionality (some might know it as ‘Hekaton’) has been the most important change. It can’t be described as its own object type, but the new functionality is split across two existing object types (three if you count indexes), as it also comes with native stored procedures and inline indexes. Along with connectivity support, the SQL Compare team has already implemented the first part of the puzzle – inline specification of indexes. These are essential for memory-optimized tables because it’s not possible to alter the memory optimized table’s structure, and so indexes can’t be added after the fact without dropping the table. Books Online  shows this in more detail in the table_index and column_index clauses of http://msdn.microsoft.com/en-us/library/ms174979(v=sql.120).aspx. SQL Compare 10.7 currently supports reading the new inline index specification from script folders and source control repositories, and will write out inline indexes where it’s necessary to do so (i.e. in UDDTs or when attempting to write projects compatible with the SSDT database project format). However, memory-optimized tables themselves are not yet supported in 10.7. The team is actively working on making them available in the v11 release with full support later in the year, and in a beta version before that. Fortunately, SQL Compare already has some ways of handling tables that have to be dropped and created rather than altered, which are being adapted to handle this new kind of table. Because it’s one of the largest new database engine features, there’s an equally large Books Online section on memory-optimized tables, but for us the most important parts of the documentation are the normal table features that are changed or unsupported and the new syntax found in the T-SQL reference pages. We are treating SQL Compare’s support of Natively Compiled Stored Procedures as a separate unit of work, which will be available in a subsequent beta and also feed into the v11 release. This new type of stored procedure is designed to work with memory-optimized tables to maintain the performance improvements gained by them – but you can still also access memory-optimized tables from normal stored procedures and ad-hoc queries. To us, they’re essentially a limited-syntax stored procedure with a few extra options in the create statement, embodied in the updated CREATE PROCEDURE documentation and with the detailed limitations. They should be easier to handle than memory-optimized tables simply because the handling of stored procedures is less sensitive to dropping the object than the handling of tables. However, both share an incompatibility with DDL triggers and Event Notifications which mean we’ll need to temporarily disable these during the specific deployment operations that involve them – don’t worry, we’ll supply a warning if this is the case so that you can check your auditing arrangements can handle the situation. There are also a handful of other improvements in SQL Server 2014 which affect SQL Compare and SQL Data Compare that are not connected to memory optimized tables. 
The largest of these are the improvements to columnstore indexes, with the capability to create clustered columnstore indexes and update columnstore tables through them – for more detail, take a look at the new syntax reference. There’s also a new index option for better compression of columnstores (COLUMNSTORE_ARCHIVE) and a new statistics option for incremental per-partition statistics, plus the 90 compatibility level is being retired. We’re planning to finish up these small clean-up features last, and be ready to release SQL Compare 11 with full SQL 2014 support early in Q3 this year. For a more thorough overview of what’s new in SQL Server 2014, Books Online’s What’s New section is a good place to start (although almost all the changes in this version are in the Database Engine).
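To make the terminology concrete, here is a rough T-SQL sketch of the kind of DDL SQL Compare has to deal with in SQL Server 2014. The object names are invented for illustration, and the database is assumed to already have a MEMORY_OPTIMIZED_DATA filegroup; consult the Books Online pages linked above for the authoritative syntax.
-- memory-optimized table: indexes must be declared inline, because the table
-- cannot be altered after creation
CREATE TABLE dbo.ShoppingCart
(
    ShoppingCartId INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    UserId INT NOT NULL
        INDEX IX_ShoppingCart_UserId NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    CreatedDate DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

-- natively compiled stored procedure: a limited-syntax procedure with a few
-- extra options in the CREATE statement
CREATE PROCEDURE dbo.usp_TouchCart @ShoppingCartId INT
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    UPDATE dbo.ShoppingCart
    SET CreatedDate = SYSUTCDATETIME()
    WHERE ShoppingCartId = @ShoppingCartId;
END;

-- columnstore improvements: a clustered (updatable) columnstore index with the
-- new archive compression option
CREATE CLUSTERED COLUMNSTORE INDEX CCI_FactSales
ON dbo.FactSales
WITH (DATA_COMPRESSION = COLUMNSTORE_ARCHIVE);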

    Read the article

< Previous Page | 28 29 30 31 32 33 34 35 36 37 38 39  | Next Page >