Search Results

Search found 1938 results on 78 pages for 'bush man'.


  • Productivity vs Security [closed]

    - by nerijus
    I really do not know if this is the right place to ask such a question, but it is about programming in a different light. So, I am currently contracting with a company which pretends to be a big corporation. Everyone is so important that all small issues, like developers, are ignored. Let me give you a sample: the company VPN is configured so that if you have the VPN up then HTTP traffic is banned. Bearing this in mind, can you imagine my workflow? Morning. OK, time to get the latest source. Oops, no VPN. Let’s connect. Click-click. 3 sec. wait time. OK, getting source. Do I have emails? Oops. VPN is on, can’t check my emails. Need to wait for the source to come up. Finally here it is! OK, click-click, VPN is gone. What is in my email? Someone reported a bug. Good, let’s track it down. It is in TFS already. Oh, damn, I need VPN. Click-click. OK, there is the description. Yeah, I have seen this issue on stackoverflow.com. Let’s go there. Oops, no internet. Click-click. No internet. What? ipconfig… The DHCP server kicked me out. Damn. Renew IP. 1..2..3. OK, the internet is back. Google: site:stackoverflow.com. 3 min. I have a solution. Great, I love stackoverflow.com. Don’t want to remember the days when there was no stackoverflow.com. OK. Copy-paste this link to Studio. Damn, Studio is stalled, can’t reach files on TFS. Click-click. VPN is back. Get the source out, paste my code. Grand. Let’s see what the other comments about the issue on stackoverflow.com say. Hmm… there is a link. Click. Dammit! No internet. Click-click. No internet. DHCP kicked me out. Dammit. Now it is even worse: this happens 3-4 times a day. After a certain number of VPN connections are opened/closed my internet goes down solid. The only way to get the internet back is a reboot. All my browser tabs/SQL windows/Studio will be gone. This happened just now while I was typing this. Back to the issue I am solving right now: I am getting frustrated - I do not care about a better solution for this issue. Let’s do it somehow and forget. This click-click barrier between the internet and TFS kills me… Sounds familiar? You could say there are VPN settings to change. No! This is a company laptop, I am not allowed to make changes. I am very, very lucky to have admin privileges on my machine. Most developers don’t. So I have just learned to live with this frustration. It takes away 40-60 minutes daily. I tried to email company support and the admins. They are too important and too busy with something, so they just ignored my little man’s problem. Politely ignored. The question is: Is this normal in the corporate world? (I have been in the States, Canada, Germany. Never seen this.)

    Read the article

  • OpenSSL Versions in Solaris

    - by darrenm
    Those of you who have installed Solaris 11 or have read some of the blogs by my colleagues will have noticed that Solaris 11 includes OpenSSL 1.0.0, which is a different version from what we have in Solaris 10. I hope the following explains why that is and how it fits with the expectations on binary compatibility between Solaris releases. Solaris 10 was the first release where we included OpenSSL libraries and headers (part of it was actually statically linked into the SSH client/server in Solaris 9). At the time we were building and releasing Solaris 10 the current train of OpenSSL was 0.9.7. The OpenSSL libraries at that time were known to not always be completely API and ABI (binary) compatible between releases (sometimes even in the lettered patch releases), though mostly if you stuck with the documented high level APIs you would be fine. For this reason OpenSSL was classified as a 'Volatile' interface, and in Solaris 10 Volatile interfaces were not part of the default library search path, which is why the OpenSSL libraries live in /usr/sfw/lib on Solaris 10. Okay, but what does Volatile mean? Quoting from the attributes(5) man page description of Volatile (which was called External in older taxonomy): Volatile interfaces can change at any time and for any reason. The Volatile interface stability level allows Sun products to quickly track a fluid, rapidly evolving specification. In many cases, this is preferred to providing additional stability to the interface, as it may better meet the expectations of the consumer. The most common application of this taxonomy level is to interfaces that are controlled by a body other than Sun, but unlike specifications controlled by standards bodies or Free or Open Source Software (FOSS) communities which value interface compatibility, it can not be asserted that an incompatible change to the interface specification would be exceedingly rare. It may also be applied to FOSS controlled software where it is deemed more important to track the community with minimal latency than to provide stability to our customers. It is also common to apply the Volatile classification level to interfaces in the process of being defined by trusted or widely accepted organizations. These are generically referred to as draft standards. An "IETF Internet draft" is a well understood example of a specification under development. Volatile can also be applied to experimental interfaces. No assertion is made regarding either source or binary compatibility of Volatile interfaces between any two releases, including patches. Applications containing these interfaces might fail to function properly in any future release. Note that last paragraph! OpenSSL is only one example of the many interfaces in Solaris that are classified as Volatile. At the other end of the scale we have Committed (Stable in Solaris 10 terminology) interfaces; these include things like the POSIX APIs or Solaris specific APIs that we have no intention of changing in an incompatible way. There are also Private interfaces and things we declare as Not-an-Interface (e.g. command output not intended for scripting against, only to be read by humans). Even if we had declared OpenSSL as a Committed/Stable interface in Solaris 10 there are allowed exceptions, again quoting from attributes(5): 4. An interface specification which isn't controlled by Sun has been changed incompatibly and the vast majority of interface consumers expect the newer interface. 5. 
Not making the incompatible change would be incomprehensible to our customers. In our opinion, and that of our large and small customers, keeping up with the OpenSSL community is important, and certainly both of the above cases apply. Our policy for dealing with OpenSSL on Solaris 10 was to stay at 0.9.7 and add fixes for security vulnerabilities (the version string includes the CVE numbers of fixed vulnerabilities relevant to that release train). The last release of OpenSSL 0.9.7 delivered by the upstream community was more than 4 years ago, in Feb 2007. Now let's roll forward to just before the release of Solaris 11 Express in 2010. By that point in time the current OpenSSL release was 0.9.8, with the 1.0.0 release known to be coming soon. Two significant changes to OpenSSL were made between Solaris 10 and Solaris 11 Express. First, in Solaris 11 Express (and Solaris 11) we removed the requirement that Volatile libraries be placed in /usr/sfw/lib, which means OpenSSL is now in /usr/lib; secondly, we upgraded it to the then current version stream of OpenSSL (0.9.8), as was expected by our customers. In between Solaris 11 Express in 2010 and the release of Solaris 11 in 2011 the OpenSSL community released version 1.0.0. This was a huge milestone for a long standing and highly respected open source project. It would have been highly negligent of Solaris not to include OpenSSL 1.0.0e in the Solaris 11 release. It is the latest, best supported and best performing version. In fact Solaris 11 isn't 'just' OpenSSL 1.0.0: we have added our SPARC T4 engine and the AES-NI engine to support the on chip crypto acceleration. This gives us 4.3x better AES performance than OpenSSL 0.9.8 running on AIX on an IBM POWER7. We are now working with the OpenSSL community to determine how best to integrate the SPARC T4 changes into the mainline OpenSSL. The OpenSSL 'pkcs11' engine we delivered in Solaris 10 to support the CA-6000 card and the SPARC T1/T2/T3 hardware is still included in Solaris 11. When OpenSSL 1.0.1 and 1.1.0 come out we will assess what is best for Solaris customers. It might be an upgrade or it might be parallel delivery of more than one version stream. At this time Solaris 11 still classifies OpenSSL as a Volatile interface; it is our hope that we will be able at some point in a future release to give it a higher interface stability level. Happy crypting! And thank you, OpenSSL community, for all the work you have done that helps Solaris.
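    A quick way to check which OpenSSL a given Solaris system provides is a couple of shell commands. The /usr/sfw versus /usr/lib locations follow from the discussion above, but the exact path of the openssl binary on Solaris 10 is an assumption and may differ on your install:

        # Solaris 10 - Volatile libraries live outside the default search path
        /usr/sfw/bin/openssl version
        ls /usr/sfw/lib/libssl.so*

        # Solaris 11 - OpenSSL moved to the standard location
        openssl version
        ls /usr/lib/libssl.so*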

    Read the article

  • Configure IPv6 on your Linux system (Ubuntu)

    After the presentation on IPv6 at the first event of the Emtel Knowledge Series and some recent discussion on social media networks with other geeks and Linux interested IT people here in Mauritius, I thought that I should give it a try (finally) and tweak my local network infrastructure. Honestly, I have been to busy with contractual project work and it never really occurred to me to set up IPv6 in my LAN. Well, the following paragraphs are going to shed some light on those aspects of modern computer and network technology. This is the first article in a series on IPv6 configuration: Configure IPv6 on your Linux system DHCPv6: Provide IPv6 information in your local network Enabling DNS for IPv6 infrastructure Accessing your web server via IPv6 Piece of advice: This is based on my findings on the internet while reading other people's helpful articles and going through a couple of man-pages on my local system. Let's embrace IPv6 The basic configuration on Linux is actually very simple as the kernel, operating system, and user-space programs support that protocol natively. If your system is ready to go for IP (aka: IPv4), then you are good to go for anything else. At least, I didn't have to install any additional packages on my system(s). We are going to assign a static IPv6 address to the system. Hence, we have to modify the definition of interfaces and check whether we have an inet6 entry specified. Open your favourite text editor and check the following entries (it should be at least similar to this): $ sudo nano /etc/network/interfaces auto eth0# IPv4 configurationiface eth0 inet static  address 192.168.1.2  network 192.168.1.0  netmask 255.255.255.0  broadcast 192.168.1.255# IPv6 configurationiface eth0 inet6 static  pre-up modprobe ipv6  address 2001:db8:bad:a55::2  netmask 64 Of course, you might have to adjust your interface device (eth0) or you might be interested to have multiple directives for additional devices (eth1, eth2, etc.). The auto instruction takes care that your device is enabled and configured during the booting phase. The use of the pre-up directive depends on your kernel configuration but in most scenarios this might be an optional line. Anyways, it doesn't hurt to have it enabled after all - just to be on the safe side. Next, either restart your network subsystem like so: $ sudo service networking restart Or you might prefer to do it manually with identical parameters, like so: $ sudo ifconfig eth0 inet6 add 2001:db8:bad:a55::2/64 In case that you're logged in remotely into your PC (ie. via ssh), it is highly advised to opt for the second choice and add the device manually. You can check your configuration afterwards with one of the following commands (depends on whether it is installed): $ sudo ifconfig eth0eth0      Link encap:Ethernet  HWaddr 00:21:5a:50:d7:94            inet addr:192.168.160.2  Bcast:192.168.160.255  Mask:255.255.255.0          inet6 addr: fe80::221:5aff:fe50:d794/64 Scope:Link          inet6 addr: 2001:db8:bad:a55::2/64 Scope:Global          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1 $ sudo ip -6 address show eth03: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qlen 1000    inet6 2001:db8:bad:a55::2/64 scope global        valid_lft forever preferred_lft forever    inet6 fe80::221:5aff:fe50:d794/64 scope link        valid_lft forever preferred_lft forever In both cases, it confirms that our network device has been assigned a valid IPv6 address. That's it in general for your setup on one system. 
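    For easier copying, here is the same interface definition restated with line breaks. The device name and addresses are the example values from above; 2001:db8::/32 is documentation address space, so substitute your own prefix:

        # /etc/network/interfaces
        auto eth0

        # IPv4 configuration
        iface eth0 inet static
          address 192.168.1.2
          network 192.168.1.0
          netmask 255.255.255.0
          broadcast 192.168.1.255

        # IPv6 configuration
        iface eth0 inet6 static
          pre-up modprobe ipv6
          address 2001:db8:bad:a55::2
          netmask 64

    After saving, either restart networking with "sudo service networking restart" or add the address by hand with "sudo ifconfig eth0 inet6 add 2001:db8:bad:a55::2/64" - the latter being the safer option over an ssh session, as noted above.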
But of course, you might be interested in enabling more services for IPv6, especially if you're already running a couple of them in your IP network. More details are available on the official Ubuntu wiki. Continue by configuring your network to provide IPv6 address information automatically in your local infrastructure.

    Read the article

  • How to Customize Fonts and Colors for Gnome Panels in Ubuntu Linux

    - by The Geek
    Earlier this week we showed you how to make the Gnome Panels totally transparent, but you really need some customized fonts and colors to make the effect work better. Here’s how to do it. This article is the first part of a multi-part series on how to customize the Ubuntu desktop, written by How-To Geek reader and ubergeek, Omar Hafiz. Changing the Gnome Colors the Easy Way You’ll first need to install Gnome Color Chooser which is available in the default repositories (the package name is gnome-color-chooser). Then go to System > Preferences > Gnome Color Chooser to launch the program. When you see all these tabs you immediately know that Gnome Color Chooser does not only change the font color of the panel, but also the color of the fonts all over Ubuntu, desktop icons, and many other things as well. Now switch to the panel tab, here you can control every thing about your panels. You can change font, font color, background and background color of the panels and start menus. Tick the “Normal” option and choose the color you want for the panel font. If you want you can change the hover color of the buttons on the panel by too. A little below the color option is the font options, this includes the font, font size, and the X and Y positioning of the font. The first two options are pretty straight forward, they change the typeface and the size. The X-Padding and Y-Padding may confuse you but they are interesting, they may give a nice look for your panels by increasing the space between items on your panel like this: X-Padding:   Y-Padding:   The bottom half of the window controls the look of your start menus which is the Applications, Places, and Systems menus. You can customize them just the way you did with the panel.   Alright, this was the easy way to change the font of your panels. Changing the Gnome Theme Colors the Command-Line Way The other hard (not so hard really) way will be changing the configuration files that tell your panel how it should look like. In your Home Folder, press Ctrl+H to show the hidden files, now find the file “.gtkrc-2.0”, open it and insert this line in it. If there are any other lines in the file leave them intact. include “/home/<username>/.gnome2/panel-fontrc” Don’t forget to replace the <user_name> with you user account name. When done close and save the file. Now navigate the folder “.gnome2” from your Home Folder and create a new file and name it “panel-fontrc”. Open the file you just created with a text editor and paste the following in it: style “my_color”{fg[NORMAL] = “#FF0000”}widget “*PanelWidget*” style “my_color”widget “*PanelApplet*” style “my_color” This text will make the font red. If you want other colors you’ll need to replace the Hex value/HTML Notation (in this case #FF0000) with the value of the color you want. To get the hex value you can use GIMP, Gcolor2 witch is available in the default repositories or you can right-click on your panel > Properties > Background tab then click to choose the color you want and copy the Hex value. Don’t change any other thing in the text. When done, save and close. Now press Alt+F2 and enter “killall gnome-panel” to force it to restart or you can log out and login again. Most of you will prefer the first way of changing the font and color for it’s ease of applying and because it gives you much more options but, some may not have the ability/will to download and install a new program on their machine or have reasons of their own for not to using it, that’s why we provided the two way. 
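    For easier copying, here are the two files from the command-line method above, restated with straight quotes and line breaks. Replace <username> with your own account name; #FF0000 is the example red from the article:

        # ~/.gtkrc-2.0  (add this line, leave any existing lines intact)
        include "/home/<username>/.gnome2/panel-fontrc"

        # ~/.gnome2/panel-fontrc
        style "my_color"
        {
          fg[NORMAL] = "#FF0000"
        }
        widget "*PanelWidget*" style "my_color"
        widget "*PanelApplet*" style "my_color"

    Then press Alt+F2 and run "killall gnome-panel" (or log out and back in) so the panel picks up the change.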

    Read the article

  • The Oracle Graduate Experience...A Graduates Perspective by Angelie Tierney

    - by david.talamelli
    [Note: Angelie has just recently joined Oracle in Australia in our 2011 Graduate Program. Last week I shared my thoughts on our 2011 Graduate Program, this week Angelie took some time to share her thoughts of our Graduate Program. The notes below are Angelie's overview from her experience with us starting with our first contact last year - David Talamelli] How does the 1 year program work? It consists of 3 weeks of training, followed by 2 rotations in 2 different Lines of Business (LoB's). The first rotation goes for 4 months, while your 2nd rotation goes for 7, when you are placed into your final LoB for the program. The interview process: After sorting through the many advertised graduate jobs, submitting so many resumes and studying at the same time, it can all be pretty stressful. Then there is the interview process. David called me on a Sunday afternoon and I spoke to him for about 30 minutes in a mini sort of phone interview. I was worried that working at Oracle would require extensive technical experience, but David stressed that even the less technical, and more business-minded person could, and did, work at Oracle. I was then asked if I would like to attend a group interview in the next weeks, to which I said of course! The first interview was a day long, consisting of a brief introduction, a group interview where we worked on a business plan with a group of other potential graduates and were marked by 3 Oracle employees, on our ability to work together and presentation. After lunch, we then had a short individual interview each, and that was the end of the first round. I received a call a few weeks later, and was asked to come into a second interview, at which I also jumped at the opportunity. This was an interview based purely on your individual abilities and would help to determine which Line of Business you would go to, should you land a graduate position. So how did I cope throughout the interview stages? I believe the best tool to prepare for the interview, was to research Oracle and its culture and to see if I thought I could fit into that. I personally found out about Oracle, its partners as well as competitors and along the way, even found out about their part (or Larry Ellison's specifically) in the Iron Man 2 movie. Armed with some Oracle information and lots of enthusiasm, I approached the Oracle Graduate Interview process. Why did I apply for an Oracle graduate position? I studied a Bachelor of Business/Bachelor of Science in IT, and wanted to be able to use both my degrees, while have the ability to work internationally in the future. Coming straight from university, I wasn't sure exactly what I wanted to do in terms of my career. With the program, you are rotated across various lines of business, to not only expose you to different parts of the business, but to also help you to figure out what you want to achieve out of your career. As a result, I thought Oracle was the perfect fit. So what can an Oracle ANZ Graduate expect? First things first, you can expect to line up for your visitor pass. Really. Next you enter a room full of unknown faces, graduates just like you, and then you realise you're in this with 18 other people, going through the same thing as you. 3 weeks later you leave with many memories, colleagues you can call your friends, and a video of your presentation. Vanessa, the Graduate Manager, will also take lots of photos and keep you (well) fed. 
Well that's not all you leave with, you are also equipped with a wealth of knowledge and contacts within Oracle, both that will help you throughout your career there. What training is involved? We started our Oracle experience with 3 weeks of training, consisting of employee orientation, extensive product training, presentations on the various lines of business (LoB's), followed by sales and presentation training. While there was potential for an information overload, maybe even death by Powerpoint, we were able to have access to the presentations for future reference, which was very helpful. This period also allowed us to start networking, not only with the graduates, but with the managers who presented to us, as well as through the monthly chinwag, HR celebrations and even with the sharing of tea facilities. We also had a team bonding day when we recorded a "commercial" within groups, and learned how to play an Irish drum. Overall, the training period helped us to learn about Oracle, as well as ourselves, and to prepare us for our transition into our rotations. Where to now? I'm now into my 2nd week of my first graduate rotation. It has been exciting to finally get out into the work environment and utilise that knowledge we gained from training. My manager has been a great mentor, extremely knowledgeable, and it has been good being able to participate in meetings, conference calls and make a contribution towards the business. And while we aren't necessarily working directly with the other graduates, they are still reachable via email, Pidgin and lunch and they are important as a resource and support, after all, they are going through a similar experience to you. While it is only the beginning, there is a lot more to learn and a lot more to experience along the way, especially because, as we learned during training, at Oracle, the only constant is change.

    Read the article

  • SQL SERVER – Not Possible – Delete From Multiple Table – Update Multiple Table in Single Statement

    - by pinaldave
    There are two questions which I get every single day, multiple times. In my Gmail, I have created a standard canned reply for them. Let us see the questions here. I want to delete from multiple tables in a single statement – how will I do it? I want to update multiple tables in a single statement – how will I do it? The answer is – No, you cannot, and you should not. SQL Server does not support deleting or updating two tables in a single statement. If you want to delete or update two different tables – you may want to write two different delete or update statements for it. The single-statement approach has many issues – from the consistency of the data to the SQL syntax. Now here is the real reason for this blog post – yesterday I was asked this question again and I replied with my canned answer saying it is not possible and it should not be implemented that way. In response to my reply I was pointed to my own blog posts, where the user suggested that I had previously mentioned this is possible, with a demo example. Let us go over my conversation – you may find it interesting. Let us call the user DJ. DJ: Pinal, can we delete multiple tables in a single statement or with a single delete statement? Pinal: No, you cannot and you should not. DJ: Oh okay, if that is the case, why do you suggest to do that? Pinal: (baffled) I am not suggesting that. I am rather suggesting that it is not possible and it should not be possible. DJ: Hmm… but in that case why did you blog about it earlier? Pinal: (What?) No, I did not. I am pretty confident. DJ: Well, I am confident as well. You did. Pinal: In that case, it is my word against your word. Isn’t it? DJ: I have proof. Do you want to see it, that you suggest it is possible? Pinal: Yes, I will be delighted to. (After 10 Minutes) DJ: Here are not one but two of your blog posts which talk about it - SQL SERVER – Curious Case of Disappearing Rows – ON UPDATE CASCADE and ON DELETE CASCADE – Part 1 of 2 SQL SERVER – Curious Case of Disappearing Rows – ON UPDATE CASCADE and ON DELETE CASCADE – T-SQL Example – Part 2 of 2 Pinal: Oh! DJ: I know I was correct. Pinal: Well, oh man, I did not mean there what you mean here. DJ: I did not understand, can you explain it further? Pinal: Here we go. The examples in those blog posts are examples of cascading delete or cascading update. I think you may want to understand the concept of foreign keys and cascading update/delete. The concept of cascading exists to maintain data integrity. If primary key rows get deleted or updated, the delete or update is reflected on the foreign key table to maintain key integrity and data consistency. SQL Server follows ANSI Entry SQL with regard to referential integrity between PrimaryKey and ForeignKey columns, which requires the inserting, updating, and deleting of data in related tables to be restricted to values that preserve the integrity. This is an altogether different concept from deleting from multiple tables in a single statement. When I hear that someone wants to delete or update multiple tables in a single statement, what I assume is something very similar to the following: DELETE/UPDATE Table1 (cols) Table2 (cols) VALUES … which is not a valid statement/syntax, nor is it part of the ANSI standard. I guess, after this discussion with DJ, I realized I need to do a blog post so I can add the link to this blog post in my canned answer. Well, it was a fun conversation with DJ and I hope the message is very clear now. 
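    To make the advice concrete, here is a minimal sketch of the correct pattern - two separate DELETE statements wrapped in a single transaction so they succeed or fail together. The table and column names (Orders, OrderDetails, OrderID) are hypothetical, used only for illustration:

        BEGIN TRANSACTION;
            -- delete the child rows first so the foreign key is satisfied
            DELETE FROM OrderDetails WHERE OrderID = 42;
            -- then delete the parent row
            DELETE FROM Orders WHERE OrderID = 42;
        COMMIT TRANSACTION;

    If the foreign key is declared with ON DELETE CASCADE - the scenario covered in the two linked posts - a single DELETE against Orders would remove the matching OrderDetails rows automatically, but that is referential integrity doing the work, not a multi-table DELETE statement.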
Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Joins, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Build 2012, some thoughts..

    - by Dennis Vroegop
    I think you probably read my rant about the logistics at Build 2012, as posted here, so I am not going into that anymore. Instead, let’s look at the content. (BTW If you did read that post and want some more info then read Nia Angelina’s post about Build. I have nothing to add to that.) As usual, there were good speakers and some speakers who could benefit from some speaker training. I find it hard to understand why Microsoft allows certain people on stage, people who speak English with such strong accents it’s hard for people, especially from abroad, to understand. Some basic training might be useful for some of them. However, it is nice to see that most speakers are project managers, program managers or even devs on the teams that build the stuff they talk about: there was a lot of knowledge on stage! And that means when you ask questions you get very relevant information. I realize I am not the average audience member here, I am regular speaker myself so I tend to look for other things when I am in a room than most audience members so my opinion might differ from others. All in all the knowledge of the speakers was above average but the presentation skills were most of the times below what I would describe as adequate. But let us look at the contents. Since the official name of the conference is Build Windows 2012 it is not surprising most of the talks were focused on building Windows 8 apps. Next to that, there was a lot of focus on Azure and of course Windows Phone 8 that launched the day before Build started. Most sessions dealt with C# and JavaScript although I did see a tendency to use C++ more. Touch. Well, that was the focus on a lot of sessions, that goes without saying. Microsoft is really betting on Touch these days and being a Touch oriented developer I can only applaud this. The term NUI is getting a bit outdated but the principles behind it certainly aren’t. The sessions did cover quite a lot on how to make your applications easy to use and easy to understand. However, not all is touch nowadays; still the majority of people use keyboard and mouse to interact with their machines (or, as I do, use keyboard, mouse AND touch at the same time). Microsoft understands this and has spend some serious thoughts on this as well. It was all about making your apps run everywhere on all sorts of devices and in all sorts of scenarios. I have seen a couple of sessions focusing on the portable class library and on sharing code between Windows 8 and Windows Phone 8. You get the feeling Microsoft is enabling us devs to write software that will be ubiquitous. They want your stuff to be all over the place and they do anything they can to help. To achieve that goal they provide us with brilliant SDK’s, great tooling, a very, very good backend in the form of Windows Azure (I was particularly impressed by the Mobility part of Azure) and some fantastic hardware. And speaking of hardware: the partners such as Acer, Lenovo and Dell are making hardware that puts Apple to a shame nowadays. To illustrate: in Bellevue (very close to Redmond where Microsoft HQ is) they have the Microsoft Store located very close to the Apple Store, so it’s easy to compare devices. And I have to say: the Microsoft offerings are much, much more appealing that what the Cupertino guys have to offer. That was very visible by the number of people visiting the stores: even on the day that Apple launched the iPad Mini there were more people in the Microsoft store than in the Apple store. 
So, the future looks like it’s going to be fun. Great hardware (did I mention the Nokia Lumia 920? No? It’s brilliant), great software (Windows 8 is in a league of its own), the best dev tools (Visual Studio 2012 is still the champion here) and a fantastic backend (Azure.. need I say more?). It’s up to us devs to fill up the stores with applications that matches this. To summarize: it is great to be a Windows developer. PS. Did I mention Surface RT? Man….. People were drooling all over it wherever I went. It is fantastic :-) Technorati Tags: Build,Windows 8,Windows Phone,Lumia,Surface,Microsoft

    Read the article

  • The Linux powered LAN Gaming House

    - by sachinghalot
    LAN parties offer the enjoyment of head to head gaming in a real-life social environment. In general, they are experiencing decline thanks to the convenience of Internet gaming, but Kenton Varda is a man who takes his LAN gaming very seriously. His LAN gaming house is a fascinating project, and best of all, Linux plays a part in making it all work.Varda has done his own write ups (short, long), so I'm only going to give an overview here. The setup is a large house with 12 gaming stations and a single server computer.The client computers themselves are rack mounted in a server room, and they are linked to the gaming stations on the floor above via extension cables (HDMI for video and audio and USB for mouse and keyboard). Each client computer, built into a 3U rack mount case, is a well specced gaming rig in its own right, sporting an Intel Core i5 processor, 4GB of RAM and an Nvidia GeForce 560 along with a 60GB SSD drive.Originally, the client computers ran Ubuntu Linux rather than Windows and the games executed under WINE, but Varda had to abandon this scheme. As he explains on his site:"Amazingly, a majority of games worked fine, although many had minor bugs (e.g. flickering mouse cursor, minor rendering artifacts, etc.). Some games, however, did not work, or had bad bugs that made them annoying to play."Subsequently, the gaming computers have been moved onto a more conventional gaming choice, Windows 7. It's a shame that WINE couldn't be made to work, but I can sympathize as it's rare to find modern games that work perfectly and at full native speed. Another problem with WINE is that it tends to suffer from regressions, which is hardly surprising when considering the difficulty of constantly improving the emulation of the Windows API. Varda points out that he preferred working with Linux clients as they were easier to modify and came with less licensing baggage.Linux still runs the server and all of the tools used are open source software. The hardware here is a Intel Xeon E3-1230 with 4GB of RAM. The storage hanging off this machine is a bit more complex than the clients. In addition to the 60GB SSD, it also has 2x1TB drives and a 240GB SDD.When the clients were running Linux, they booted over PXE using a toolchain that will be familiar to anyone who has setup Linux network booting. DHCP pointed the clients to the server which then supplied PXELINUX using TFTP. When booted, file access was accomplished through network block device (NBD). This is a very easy to use system that allows you to serve the contents of a file as a block device over the network. The client computer runs a user mode device driver and the device can be mounted within the file system using the mount command.One snag with offering file access via NBD is that it's difficult to impose any security restrictions on different areas of the file system as the server only sees a single file. The advantage is perfomance as the client operating system simply sees a block device, and besides, these security issues aren't relevant in this setup.Unfortunately, Windows 7 can't use NBD, so, Varda had to switch to iSCSI (which works in both server and client mode under Linux). His network cards are not compliant with this standard when doing a netboot, but fortunately, gPXE came to the rescue, and he boostraps it over PXE. gPXE is also available as an ISO image and is worth knowing about if you encounter an awkward machine that can't manage a network boot. 
It can also optionally boot from a HTTP server rather than the more traditional TFTP server.According to Varda, booting all 12 machines over the Gigabit Ethernet network is surprisingly fast, and once booted, the machines don't seem noticeably slower than if they were using local storage. Once loaded, most games attempt to load in as much data as possible, filling the RAM, and the the disk and network bandwidth required is small. It's worth noting that these are aspects of this project that might differ from some other thin client scenarios.At time of writing, it doesn't seem as though the local storage of the client machines is being utilized. Instead, the clients boot into Windows from an image on the server that contains the operating system and the games themselves. It uses the copy on write feature of LVM so that any writes from a client are added to a differencing image allocated to that client. As the administrator, Varda can log into the Linux server and authorize changes to the master image for updates etc.SummaryOverall, Varda estimates the total cost of the project at about $40,000, and of course, he needed a property that offered a large physical space in order to house the computers and the gaming workstations. Obviously, this project has stark differences to most thin client projects. The balance between storage, network usage, GPU power and security would not be typical of an office installation, for example. The only letdown is that WINE proved to be insufficiently compatible to run a wide variety of modern games, but that is, perhaps, asking too much of it, and hats off to Varda for trying to make it work.
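    The copy-on-write image handling described above maps naturally onto LVM snapshots. As a rough sketch only - the volume group, volume names and sizes below are invented for illustration and are not taken from Varda's write-up:

        # the master logical volume holds Windows plus the installed games;
        # each client gets its own writable snapshot, so client writes land
        # in the differencing image and never touch the master
        lvcreate --snapshot --name client01 --size 20G /dev/vg_games/master
        lvcreate --snapshot --name client02 --size 20G /dev/vg_games/master

        # the per-client volumes are then exported to the Windows 7 clients
        # over iSCSI (or served with nbd-server when the clients ran Linux)

    Updates are applied to the master image by the administrator, and fresh snapshots can then be handed out to the clients.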

    Read the article

  • Updating a database connection password using a script

    - by Tim Dexter
    An interesting customer requirement that I thought was worthy of sharing today. Thanks to James for the requirement and Bryan for the proposed solution and me for testing the solution and proving it works :0) A customers implementation of Sarbanes Oxley requires them to change all database account passwords every 90 days. This is scripted leveraging shell scripts today for most of their environments. But how can they manage the BI Publisher connections? Now, the customer is running 11g and therefore using weblogic on the middle tier, which is the first clue to Bryans proposed solution. To paraphrase and embellish Bryan's solution a little; why not use a JNDI connection from BIP to the database. Then employ the web logic scripting engine to make updates to the JNDI as needed? BIP is completely uninvolved and with a little 'timing' users will be completely unaware of the password updates i.e. change the password when reports are not being executed. Perfect! James immediately tracked down the WLST script that could be used here, http://middlewaremagic.com/weblogic/?p=4261 (thanks Ravish) Now it was just a case of testing the theory. Some steps: Create the JNDI connection in WLS Create the JNDI connection in BI Publisher pointing to the WLS connection Build new data models using or re-point data sources to use the JNDI connection. Create the WLST script to update the WLS JNDI password as needed. Test! Some details. Creating the JNDI connection in web logic is pretty straightforward. Log into hte console and look for Data Sources under the Services section of the home page and click it Click New >> Generic Datasource Give the connection a name. For the JNDI name, prefix it with 'jdbc/' so I have 'jdbc/localdb' - this name is important you'll need it on the BIP side. Select your db type - this will influence the drivers and information needed on the next page. Being a company man, Im using an Oracle db. Click Next Select the driver of choice, theres lots I know, you can read about them I just chose 'Oracle's Driver (Thin) for Instance connections; Versions 9.0.1 and later' Click Next >> Next Fill out the db name (SID), server, port, username to connect and password >> Next Test the config to ensure you can connect. >> Next Now you need to deploy the connection to your BI server, select it and click Next. You're done with the JNDI config. Creating the JNDI connection on the Publisher side is covered here. Just remember to the connection name you created in WLS e.g. 'jdbc/localdb' Not gonna tell you how to do this, go read the user guide :0) Suffice to say, it works. This requires a little reading around the subject to understand the scripting engine and how to execute scripts. Nicely covered here. However a bit of googlin' and I found an even easier way of running the script. ${ServerHome}/common/bin/wlst.sh updatepwd.py Where updatepwd.py is my script file, it can be in another directory. As part of the wlst.sh script your environment is set up for you so its very simple to execute. The nitty gritty: Need to take Ravish's script above and create a file with a .py extension. Its going to need some modification, as he explains on the web page, to make it work in your environment. I played around with it for a while but kept running into errors. The script as is, tries to loop through all of your connections and modify the user and passwords for each. Not quite what we are looking for. Remember our requirement is to just update the password for a given connection. 
I also found another issue with the script. WLS 10.x does not allow updates to passwords using clear text (i.e. un-encrypted text) while the server is in production mode. It's a bit much to set it back to development mode, bounce it, change the passwords, bounce, then change back to production mode and bounce again. After lots of messing about I finally came up with the following:

    #############################################################################
    #
    # Update password for JNDI connections
    #
    #############################################################################

    print("*** Trying to Connect.... *****")
    connect('weblogic','welcome1','t3://localhost:7001')
    print("*** Connected *****")

    edit()
    startEdit()

    print ("*** Encrypt the password ***")
    en = encrypt('hr')
    print "Encrypted pwd: ", en

    print ("*** Changing pwd for LocalDB ***")
    dsName = 'LocalDB'
    print 'Changing Password for DataSource ', dsName
    cd('/JDBCSystemResources/'+dsName+'/JDBCResource/'+dsName+'/JDBCDriverParams/'+dsName)
    set('PasswordEncrypted',en)

    save()
    activate()

It's pretty simple and you can expand on it to loop through the data sources and change each as needed. I have hardcoded the password into the file but you can pass it as a parameter as needed using the properties file method. I'm not going to get into the detail of that here but it's covered with an example here. Couple of points to note: 1. The change to the password requires a server bounce to get the changes picked up. You can add that to the shell script you will use to call the script above. 2. The script above needs to be run from the MW_HOME\user_projects\domains\bifoundation_domain directory to get the encryption libraries set correctly. My command to run the whole script was: d:\oracle\bi_mw\wlserver_10.3\common\bin\wlst.cmd updatepwd.py - where wlst.cmd is the scripting command line and updatepwd.py was my update password script above. I have not quite spoon-fed everything you need to make it a robust script but at least you know you can do it and you can work out the rest I think :0)
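    For point 1, the calling shell script might look roughly like this - a sketch only, with placeholder paths and a placeholder managed server name (bi_server1), using the standard stop/start scripts that WebLogic generates in the domain's bin directory:

        #!/bin/sh
        # run from the domain directory so the encryption libraries resolve
        cd $MW_HOME/user_projects/domains/bifoundation_domain

        # update the data source password via the WLST script above
        $MW_HOME/wlserver_10.3/common/bin/wlst.sh /path/to/updatepwd.py

        # bounce the managed server so the new password is picked up
        bin/stopManagedWebLogic.sh bi_server1
        bin/startManagedWebLogic.sh bi_server1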

    Read the article

  • Silverlight Reporting Application Part 3.5 - Prism Background and WCF RIA [Series Intermission]

    Taking a step back before I dive into the details and full-on coding fun, I wanted to once again respond to a comment on my last post to clear up some things in regards to how I'm setting up my project and some of the choices I've made. Aka, thanks Ben. :) Prism Project Setup For starters, I'm not the ideal use case for a Prism application. In most cases where you've got a one-man team, Prism can be overkill as it is more intended for large teams who are geographically dispersed or in applications that have a larger scale than my Recruiting application in which you'll greatly benefit from modularity, delayed loading of xaps, etc. What Prism offers, though, is a manner for handling UI, commands, and events with the idea that, through a modular approach in which no parts really need to know about one another, I can update this application bit by bit as hiring needs change or requirements differ between offices without having to worry that changing something in the Jobs module will break something in, say, the Scheduling module. All that being said, here's a look at how our project breakdown for Recruit (MVVM/Prism implementation) looks: This could be a little misleading though, as each of those modules is actually another project in the overall Recruit solution. As far as what the projects actually are, that looks a bit like this: Recruiting Solution Recruit (Shell up there) - Main Silverlight Application .Web - Default .Web application to host the Silverlight app Infrastructure - Silverlight Class Library Project Modules - Silverlight Class Library Projects Infrastructure &Modules The Infrastructure project is probably something you'll see to some degree in any composite application. In this application, it is going to contain custom commands (you'll see the joy of these in a post or two down the road), events, helper classes, and any custom classes I need to share between different modules. Think of this as a handy little crossroad between any parts of your application. Modules on the other hand are the bread and butter of this application. Besides the shell, which holds the UI skeleton, and the infrastructure, which holds all those shared goodies, the modules are self-contained bundles of functionality to handle different concerns. In my scenario, I need a way to look up and edit Jobs, Applicants, and Schedule interviews, a Notification module to handle telling the user when different things are happening (i.e., loading from database), and a Menu to control interaction and moving between different views. All modules are going to follow the following pattern: The module class will inherit from IModule and handle initialization and loading the correct view into the correct region, whereas the Views and ViewModels folders will contain paired Silverlight user controls and ViewModel class backings. WCF RIA Services Since we've got all the projects in a single solution, we did not have to go the route of creating a WCR RIA Services Class Library. Every module has it's WCF RIA link back to the main .Web project, so the single Linq-2-SQL (yes, I said Linq-2-SQL, but I'll soon be switching to OpenAccess due to the new visual designer) context I'm using there works nicely with the scope of my project. If I were going for completely separating this project out and doing different, dynamically loaded elements, I'd probably go for the separate class library. Hope that clears that up. 
In the future though, I will be using that in a project that I've got in the "when I've got enough time to work on this" pipeline, so we'll get into that eventually- and hopefully when WCF RIA is in full release! Why Not use Silverlight Navigation/Business Template? The short answer- I'm a creature of habit, and having used Silverlight for a few years now, I'm used to doing lots of things manually. :) Plus, starting with a blank slate of a project I'm able to set up things exactly as I want them to be. In this case, rather than the navigation frame we would see in one of the templates, the MainRegion/ContentControl is working as our main navigation window. In many cases I will use theSilverlight navigation template to start things off, however in this case I did not need those features so I opted out of using that. Next time when I actually hit post #4, we're going to get into the modules and starting to get functionality into this application. Next week is also release week for the Q1 2010 release, so be sure to check out our annualWebinar Week (I might be biased, but Wednesday is my favorite out of the group). Did you know that DotNetSlackers also publishes .net articles written by top known .net Authors? We already have over 80 articles in several categories including Silverlight. Take a look: here.

    Read the article

  • Specs, Form and Function – What am I Missing?

    - by Barry Shulam
    Friday October 26th the Microsoft Surface RT arrived at the office. I was summoned to my boss's office for the grand unpacking. If I had planned ahead I could have used my iPhone 4 to film the event and post it on YouTube, however the desire to hold the device and turn it ON was more inviting than becoming a proxy reviewer for Engadget's website. 1980 was the first time we had a personal computer in our house. It was a Kaypro computer. It weighed 29 pounds, more than any person's lap could hold. Back then the term "portable computer" meant you could remove it from the building and take it elsewhere. Today I am typing this entry on a MacBook Air which weighs 2.38 pounds. This morning Amazon's front page main title is: "Much More for Much Less". I was born at the right time to start with the CP/M operating system on the Kaypro, through the DOS, Windows, Linux, Mac OS X and mobile phone operating systems and languages. If you are not aware, technology is moving at a rapid pace. The new iPad (for those who are keeping score – iPad 4) is replacing a 7 month old machine, the new iPad (iPad 3). I have used and owned many technology devices in my life. The main point that most of the readers who are in the USA overlook is the fact that we are in the USA. The devices we purchase have a great digital garden to support them. The Kaypro computer had a 7-inch screen. It was a TV tube with two colors – black and green. You could see the 80-column screen flicker with characters – have you ever played Pac-Man emulated on the screen with the ABC characters? Traveling across the world you will find that not all apps on your device will function as they did back home because they are not offered outside of your country of origin. I think the main question a buyer of technology should be asking is Function. The greatest specs without function limit you. The most beautiful form without function is the same as a crystal vase on your shelf – not a good cereal bowl in the morning. Microsoft Surface RT, Amazon Kindle Fire and Apple iPad are all great devices in their respective customers' hands. My advice for those looking to purchase this year: if the device is your only technology device, you buy what you WANT and LIKE. Consider this parallel universe if it's not your only device: ever go shopping for clothing, shoes, and accessories with your wife, girlfriend, sister or mother? If you listen carefully you will hear the little voices coming out of their heads saying: "This goes well with that and I can use it also with that outfit", "Do you think this clashes with that?", "Ohh I love how that combination looks on you". Portable devices such as tablets and computers can offer a whole lot more when they are combined with the digital ecosystem you have at home and the manufacturer offers online. Pros of each device: Microsoft Surface RT: There is new functionality named SmartGlass which will let you share the content off your tablet to your Xbox 360.  
Microsoft Office is loaded on the tablet. You can have more than one user profile on the tablet if you share it with others. Amazon Kindle or Kindle HD: If you are an Amazon consumer with an annual Amazon Prime service you can consume videos and read books off the Amazon site. It's the cheapest device. It's a step up from the Kindle reader in many ways. Apple iPad or iPad mini: Over 270 thousand applications. AirPlay gives you the ability to share to your TV screen. If you are a cord cutter (a person who gets their entertainment content over the web or the air instead of from cable providers) AirPlay or SmartGlass are a huge bonus. iPad mini or not: The mini will fit in a purse where the larger one will not. It's lighter, which makes it nice to hold for prolonged periods. It has an option for LTE wireless which none of the other sub-9-inch tablets offer. The screen is non-Retina, which means the applications are smaller. Speaking with individuals above 50 in age who wear glasses, the Retina display does not make a difference for them, however they prefer the larger iPad over the new mini. Happy Shopping this Chanukah Season. The Kosher Coder. Follow me on twitter @KosherCoder

    Read the article

  • Computer Networks UNISA - Chap 8 &ndash; Wireless Networking

    - by MarkPearl
    After reading this section you should be able to: Explain how nodes exchange wireless signals; Identify potential obstacles to successful transmission and their repercussions, such as interference and reflection; Understand WLAN architecture; Specify the characteristics of popular WLAN transmission methods including 802.11 a/b/g/n; Install and configure wireless access points and their clients; Describe wireless MAN and WAN technologies, including 802.16 and satellite communications. The Wireless Spectrum All wireless signals are carried through the air by electromagnetic waves. The wireless spectrum is a continuum of the electromagnetic waves used for data and voice communication. The wireless spectrum falls between 9 kHz and 300 GHz. Characteristics of Wireless Transmission Antennas Each type of wireless service requires an antenna specifically designed for that service. The service's specifications determine the antenna's power output, frequency, and radiation pattern. A directional antenna issues wireless signals along a single direction. An omnidirectional antenna issues and receives wireless signals with equal strength and clarity in all directions. The geographical area that an antenna or wireless system can reach is known as its range. Signal Propagation LOS (line of sight) uses the least amount of energy and results in the reception of the clearest possible signal. When there is an obstacle in the way, the signal may pass through the object, be absorbed by the object, or be subject to reflection, diffraction or scattering. Reflection – waves encounter an object and bounce off it. Diffraction – a signal splits into secondary waves when it encounters an obstruction. Scattering – the diffusion or reflection of a signal in multiple different directions. Signal Degradation Fading occurs as a signal hits various objects. Because of fading, the strength of the signal that reaches the receiver is lower than the transmitted signal strength. The further a signal moves from its source, the weaker it gets (this is called attenuation). Signals are also affected by noise (electromagnetic interference). Interference can distort and weaken a wireless signal in the same way that noise distorts and weakens a wired signal. Frequency Ranges Older wireless devices used the 2.4 GHz band to send and receive signals. This band has 11 communication channels that are unlicensed. Newer wireless devices can also use the 5 GHz band, which has 24 unlicensed bands. Narrowband, Broadband, and Spread Spectrum Signals Narrowband – a transmitter concentrates the signal energy at a single frequency or in a very small range of frequencies. Broadband – uses a relatively wide band of the wireless spectrum and offers higher throughputs than narrowband technologies. The use of multiple frequencies to transmit a signal is known as spread-spectrum technology. In other words a signal never stays continuously within one frequency range during its transmission. One specific implementation of spread spectrum is FHSS (frequency hopping spread spectrum). Another type is known as DSSS (direct sequence spread spectrum). Fixed vs. 
Mobile Each type of wireless communication falls into one of two categories: Fixed – the location of the transmitter and receiver do not move (this results in energy saved because weaker signal strength is possible with directional antennas); Mobile – the location can change. WLAN (Wireless LAN) Architecture There are two main types of arrangements: Ad hoc – data is sent directly between devices – good for small local setups; Infrastructure mode – a wireless access point is placed centrally, and all devices connect to it. 802.11 WLANs The most popular wireless standards used on contemporary LANs are those developed by IEEE's 802.11 committee. Over the years several distinct standards related to wireless networking have been released. Four of the best known standards are also referred to as Wi-Fi. They are 802.11b, 802.11a, 802.11g and 802.11n. These four standards share many characteristics, i.e. all four use half-duplex signalling and follow the same access method. Access Method 802.11 standards specify the use of CSMA/CA (Carrier Sense Multiple Access with Collision Avoidance) to access a shared medium. Using CSMA/CA, before a station begins to send data on an 802.11 network, it checks for existing wireless transmissions. If the source node detects no transmission activity on the network, it waits a brief period of time and then sends its transmission. If the source does detect activity, it waits a brief period of time before checking again. The destination node receives the transmission and, after verifying its accuracy, issues an acknowledgement (ACK) packet to the source. If the source receives the ACK it assumes the transmission was successful; if it does not receive an ACK it assumes the transmission failed and sends it again. Association Two types of scanning: Active – the station transmits a special frame, known as a probe, on all available channels within its frequency range. When an access point finds the probe frame, it issues a probe response. Passive – the wireless station listens on all channels within its frequency range for a special signal, known as a beacon frame, issued from an access point – the beacon frame contains the information necessary to connect to the point. Re-association occurs when a mobile user moves out of one access point's range and into the range of another. Frames Read pages 378 – 381 about frames and specific 802.11 protocols. Bluetooth Networks Ericsson originally invented the Bluetooth technology in the early 1990s. In 1998 other manufacturers joined Ericsson in the Special Interest Group (SIG) whose aim was to refine and standardize the technology. Bluetooth was designed to be used on small networks composed of personal communications devices. It has become a popular wireless technology for communicating among cellular telephones, phone headsets, etc. Wireless WANs and Internet Access Refer to pages 396 – 402 of the textbook for details.
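    As a study aid, the CSMA/CA send sequence described under Access Method can be sketched as a tiny simulation. This is illustrative only - the timings, retry limit and channel model are invented for the example and are not taken from the 802.11 standard:

        import random
        import time

        def channel_busy():
            # stand-in for carrier sensing; a real station samples the medium
            return random.random() < 0.3

        def transmit(frame):
            pass  # placeholder for the radio layer

        def wait_for_ack():
            # placeholder: the destination ACKs most frames after verifying them
            return random.random() < 0.9

        def send_with_csma_ca(frame, max_retries=4):
            """Listen before sending, back off while busy, retransmit until ACKed."""
            for attempt in range(max_retries):
                # wait until no transmission activity is detected
                while channel_busy():
                    time.sleep(random.uniform(0.001, 0.010))
                transmit(frame)
                if wait_for_ack():
                    return True   # ACK received - transmission succeeded
                # no ACK: assume the transmission failed and send again
            return False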

    Read the article

  • Private Cloud: Putting some method behind the madness

    - by Sudip Datta
    Finally, I decided to join the blogging community. And what could be a better time to start than the week after OpenWorld 2012? 50K+ attendees, demonstrations, speaker sessions and a whole lot of buzz on Oracle Cloud... it was raining clouds at this year's OpenWorld. I am not here to write about Oracle's cloud strategy in general, but about Enterprise Manager's cloud management capabilities. This year's OpenWorld was the first after we announced 12c Cloud Control, and we were happy to share the stage with quite a few early adopters. Stay tuned for videos from our customers and partners; I will post them as they get published. I met a number of platform administrators - Oracle DBAs, Middleware Admins, SOA Admins... The cloud has affected them all, at least to the point where it beckoned more than just curiosity. Most IT infrastructures are already heavily virtualized (on VMware and on others, including Oracle VM), and some would claim they are already on "cloud" (at least their sysadmins told them so). But none of them were confident of the benefits, because their pain points continued to grow. Isn't the cloud supposed to ease those? Instead, they were chasing hundreds of databases running on hundreds of VMs, often with as much certainty as propounded by Heisenberg. What happened to the age-old IT disciplines around administration, compliance, and configuration management? VMs are great for what they are. I personally think they have opened the doors to new approaches in which an application stack gets provisioned and updated. In fact, Enterprise Manager 12c is possibly the only tool out there that can provision full-fledged applications as VM Assemblies. In this year's OpenWorld, customers talked about how they provisioned RAC and Siebel assemblies, which, as the techies out there know, are not trivial (hearing that provisioning time for Siebel is down from weeks to hours was gratifying indeed). However, I do have an issue with a "one-size-fits-all" approach to cloud. In a week's span, I met several personas:
- Project owners requiring an EC2-like VM instance for their projects
- Admins needing the same for SPARC-Solaris
- DBAs requiring dedicated databases for new projects
- APEX Developers needing just a ready-to-consume schema as a service
- Java Developers looking for a runtime platform
- QA engineers needing a fast clone of their production environment
If you drill down further, you will end up peeling back more layers of detail. For example, the requirements for load testing and functional testing are very different. For load testing, the test environment should ideally be the same as production. You shouldn't run production on Exadata and load test on a VM; they will just not be good representations of one another. For functional testing it possibly does not matter. DBAs seem to be the worst affected of the lot. It seems they have been asked to choose between agile provisioning and faster runtime performance. And in some cases it is really a Hobson's choice, because their infrastructure provider made no distinction between the OLTP application and the virtual desktop! Sad indeed. When one looks at the portfolio of services that we already offer (vanilla IaaS, VM Assembly based PaaS, DBaaS) or have announced (Java PaaS, Instant Cloning, Schema-aaS), one can possibly think that we are trying to be the "renaissance man"! Well, I would have possibly digested that had it not been for the various personas that I described above.
Getting the use cases right is very important for an application such as cloud management. We iterate over these again and again and re-validate them in CABs (Customer Advisory Boards). We consider all the major aspects of tenancy: service placement, resource isolation (can a tenant execute an expensive SQL statement and run away with all the resources?), quota, and security. We, in Engineering, keep reminding ourselves that we are dealing with enterprise clouds. We owe it to our customer base! In the coming posts, I will drill down more into each of the services. In the meantime, here are some collateral and demos to get started with EM 12c. http://www.oracle.com/technetwork/oem/cloud-mgmt/index.html Sudip Datta The views expressed here are my own and do not necessarily reflect the views of Oracle. Stay Connected: Twitter | Facebook | YouTube | LinkedIn | Newsletter

    Read the article

  • Social Engagement: One Size Doesn't Fit Anyone

    - by Mike Stiles
    The key to achieving meaningful social engagement is to know who you’re talking to, know what they like, and consistently deliver that kind of material to them. Every magazine for women knows this. When you read the article titles promoted on their covers, there’s no mistaking for whom that magazine is intended. And yet, confusion still reigns at many brands as to exactly whom they want to talk to, what those people want to hear, and what kind of content they should be creating for them. In most instances, the root problem is brands want to be all things to all people. Their target audience…the world! Good luck with that. It’s 2012, the age of aggregation and custom content delivery. To cope with the modern day barrage of information, people have constructed technological filters so that content they regard as being “for them” is mostly what gets through. Even if your brand is for men and women, young and old, you may want to consider social properties that divide men from women, and young from old. Yes, a man might find something in a women’s magazine that interests him. But that doesn’t mean he’s going to subscribe to it, or buy even one issue. In fact he’ll probably never see the article he’d otherwise be interested in, because in his mind, “This isn’t for me.” It wasn’t packaged for him. News Flash: men and women are different. So it’s a tall order to craft your Facebook Page or Twitter handle to simultaneously exude the motivators for both. The Harris Interactive study “2012 Connecting and Communicating Online: State of Social Media” sheds light on the differing social behaviors and drivers. -65% of women (vs. 59% of men) stay glued to social because they don’t want to miss anything. -25% of women check social when they wake up, before they check email. Only 18% of men check social before e-mail. -95% of women surveyed belong to Facebook vs. 86% of men. -67% of women log in to Facebook once a day or more vs. 54% of men. -Conventional wisdom is Pinterest is mostly a woman-thing, right? That may be true for viewing, but not true for sharing. Men are actually more likely to share on Pinterest than women, 23% to 10%. -The sharing divide extends to YouTube. 68% of women use it mainly for consumption, as opposed to 52% of men. -Women are as likely to have a Twitter account as men, but they’re much less likely to check it often. 54% of women check it once a week compared to 2/3 of men. Obviously, there are some takeaways from this depending on your target. Women don’t want to miss out on anything, so serialized content might be a good idea, right? Promotional posts that lead to a big payoff could keep them hooked. Posts for women might be better served first thing in the morning. If sharing is your goal, maybe male-targeted content is more likely to get those desired shares. And maybe Twitter is a better place to aim your male-targeted content than Facebook. Some grocery stores started experimenting with male-only aisles. The results have been impressive. Why? Because while it’s true men were finding those same items in the store just fine before, now something has been created just for them. They have a place in the store where they belong. Each brand’s strategy and targets are going to differ. The point is…know who you’re talking to, know how they behave, know what they like, and deliver content using any number of social relationship management targeting tools that meets their expectations. 
If, however, you're committed to a one-size-fits-all, "our content is for everybody" strategy (or even worse, a "this is what we want to put out and we expect everybody to love it" strategy), your content will miss the mark far more often than it hits. @mikestiles Photo via stock.xchng

    Read the article

  • Documentation Changes in Solaris 11.1

    - by alanc
    One of the first places you can see Solaris 11.1 changes is in the docs, which have now been posted in the Solaris 11.1 Library on docs.oracle.com. I spent a good deal of time reviewing documentation for this release, and thought some would be interesting to blog about, but didn't review all the changes (not by a long shot), and am not going to cover all the changes here, so there's plenty left for you to discover on your own. Just comparing the Solaris 11.1 Library list of docs against the Solaris 11 list will show a lot of reorganization and refactoring of the doc set, especially in the system administration guides. Hopefully the new breakdown will make it easier to get straight to the sections you need when a task is at hand.
Packaging System
Unfortunately, the excellent in-depth guide for how to build packages for the new Image Packaging System (IPS) in Solaris 11 wasn't done in time to make the initial Solaris 11 doc set. An interim version was published shortly after release, in PDF form on the OTN IPS page. For Solaris 11.1 it was included in the doc set, as Packaging and Delivering Software With the Image Packaging System in Oracle Solaris 11.1, so should be easier to find, and easier to share links to specific pages in the HTML version. Beyond just how to build a package, it includes details on how Solaris is packaged, and how package updates work, which may be useful to all system administrators who deal with Solaris 11 upgrades & installations. The Adding and Updating Oracle Solaris 11.1 Software Packages guide was also extended, including new sections on Relaxing Version Constraints Specified by Incorporations and Locking Packages to a Specified Version that may be of interest to those who want to keep the Solaris 11 versions of certain packages when they upgrade, such as the couple of packages that had functionality removed by an (unusual for an update release) End of Feature process in the 11.1 release. Also added in this release is a document containing the lists of all the packages in each of the major package groups in Solaris 11.1 (solaris-desktop, solaris-large-server, and solaris-small-server). While you can simply get the contents of those groups from the package repository, either via the web interface or the pkg command line, the documentation puts them in handy tables for easier side-by-side comparison, or viewing the lists before you've installed the system to pick which one you want to initially install.
X Window System
We've not had good X11 coverage in the online Solaris docs in a while, mostly relying on the man pages, and upstream X.Org docs. In this release, we've integrated some X coverage into the Solaris 11.1 Desktop Administrator's Guide, including sections on installing fonts for fontconfig or legacy X11 clients, X server configuration, and setting up remote access via X11 or VNC. Of course we continue to work on improving the docs, including a lot of contributions to the upstream docs all OS'es share (more about that another time).
Security
One of the things Oracle likes to do for its products is to publish security guides for administrators & developers to know how to build systems that meet their security needs. For Solaris, we started this with Solaris 11, providing a guide for sysadmins to find where the security relevant configuration options were documented.
The Solaris 11.1 Security Guidelines extend this to cover new security features, such as Address Space Layout Randomization (ASLR) and Read-Only Zones, as well as adding additional guidelines for existing features, such as how to limit the size of tmpfs filesystems, to avoid users driving the system into swap thrashing situations. For developers, the corresponding document is the Developer's Guide to Oracle Solaris 11 Security, which has been the source for years for documentation of security-relevant Solaris API's such as PAM, GSS-API, and the Solaris Cryptographic Framework. For Solaris 11.1, a new appendix was added to start providing Secure Coding Guidelines for Developers, leveraging the CERT Secure Coding Standards and OWASP guidelines to provide the base recommendations for common programming languages and their standard API's. Solaris specific secure programming guidance was added via links to other documentation in the product doc set. In parallel, we updated the Solaris C Library Functions security considerations list with details of Solaris 11 enhancements such as FD_CLOEXEC flags, additional *at() functions, and new stdio functions such as asprintf() and getline(). A number of code examples throughout the Solaris 11.1 doc set were updated to follow these recommendations, changing unbounded strcpy() calls to strlcpy(), sprintf() to snprintf(), etc. so that developers following our examples start out with safer code. The Writing Device Drivers guide even had the appendix updated to list which of these utility functions, like snprintf() and strlcpy(), are now available via the Kernel DDI.
Little Things
Of course all the big new features got documented, and some major efforts were put into refactoring and renovation, but there were also a lot of smaller things that got fixed as well in the nearly a year between the Solaris 11 and 11.1 doc releases - again too many to list here, but a random sampling of the ones I know about & found interesting or useful:
- The Privileges section of the DTrace Guide now gives users a pointer to find out how to set up DTrace privileges for non-global zones and what limitations are in place there.
- A new section on Recommended iSCSI Configuration Practices was added to the iSCSI configuration section when it moved into the SAN Configuration and Multipathing administration guide.
- The Managing System Power Services section contains an expanded explanation of the various tunables for power management in Solaris 11.1.
- The sample dcmd sources in /usr/demo/mdb were updated to include ::help output, so that developers like myself who follow the examples don't forget to include it (until a helpful code reviewer pointed it out while reviewing the mdb module changes for Xorg 1.12).
- The README file in that directory was updated to show the correct paths for installing both kernel & userspace modules, including the 64-bit variants.

    Read the article

  • Rules for Naming

    - by PointsToShare
    © 2011 By: Dov Trietsch. All rights reserved
Naming Documents (or is it "Document, Naming"?)
Tis but thy name that is my enemy;
Thou art thyself, though not a Montague.
What's Montague? It is nor hand, nor foot,
Nor arm, nor face, nor any other part
Belonging to a man. O, be some other name!
What's in a name? That which we call a rose
By any other name would smell as sweet;
So Romeo would, were he not Romeo call'd,
Retain that dear perfection which he owes
Without that title. Romeo, doff thy name
And for that name which is no part of thee
Take all myself.
Shakespeare – Romeo and Juliet, Act II, Scene 2
We normally only use the bold portion of the famous Shakespearean quote above, but it is really out of context. As the play unfolds, we learn that a name is all too powerful. Indeed it is because of their names that the doomed lovers die. There might be life and death in a name (BTW, when I wrote this monograph, I was in Hatfield, PA. Remember the Hatfields and the McCoys?) This is a bit extreme, but in the field of Knowledge Management (KM) names are of the utmost importance as well. When I write an article about managing SharePoint sites, how should I name it? "Managing a site" or "Site, managing"? Nine times out of ten I'd opt for the latter. Almost everything we do is "Managing", so to make life easier for a person looking for meaningful content, we title our articles starting with the differentiator rather than the common factor. As a rule of thumb, we start the name with the noun rather than the verb. It is not what we do that is the primary key; it is what we do it to. So, answer this – is it a "rule of thumb" or a "thumb rule"? This is tough. A lot of what we do when naming is a judgment call. Both thumb and rule are nouns, albeit concrete and abstract (more about this later), but to most people "thumb rule" is meaningless while "rule of thumb" is an idiom. The difference between knowledge and information is that knowledge is meaningful information placed in context. Thus I elect the "rule of thumb". It is the more meaningful title. Abstract and Concrete are relative terms. Many nouns (and verbs) that are abstract to a commoner are concrete to a practitioner of one profession or another, and may even have different concrete meanings in different professional jargons. Think about "running". To an executive it means running a business; to a marathoner its meaning is much more literal. Generally speaking, we store and disseminate knowledge within a practice more than we do it in general. Even dictionaries and encyclopedias define terms as they apply to different audiences. The rule of thumb is to put the more concrete first, but within the audience's jargon. Even the title of this monograph is a question. Do I name it "Naming Documents" or "Documents, Naming"? Well, my own rule of thumb ("Here he goes again!?") states that the latter is better because it starts with a noun, but this is a document about naming more than it is about documents. The rules of naming also apply to graphs and charts, Excel spreadsheets, and so on. Thus, I vote for the former. A better title could have been "Naming Objects", only the word "Object" is a bit too abstract. How about just "Naming" or "Naming, rules of"? You get the drift. One of the ways to resolve all of this is to store the documents in knowledge bases, which may become the subjects of a future punditry. Knowledge bases use keywords to describe their content. Use a metadata store for the keywords to at least attempt some common ground.
Here is another general rule (rule of thumb?!!) – put at least one keyword in the title. Use subtitles. Here is an example: Migrating documents – Screening, cleaning, and organizing our knowledge. The main keyword is "documents"; next is "migrating"; other keywords also appear in the subtitle: "screening", "cleaning", and "organizing". Any questions? Send me an amply named document by email: [email protected]

    Read the article

  • Some Early Considerations

    - by Chris Massey
    Following on from my previous post, I want to say "thank you" to everyone who has got in touch and got involved – you are pioneers! An update on where we are right now: paper prototypes, v1. To be more specific, we've picked two of the ideas that seem to have more pros than cons, turned them into Balsamiq mockups, and are getting them fleshed out with realistic content. We'll initially make these available to the aforementioned pioneers (thank you again), roll in the feedback, and then open up to get more data on what works and what doesn't. If you've got any questions about this (or what we're working on right now), feel free to ask me in the comments below. I've had a few people express an interest in the process we're going through, and I'm more than happy to share details more frequently as we go along – not least because you, dear reader, will help us stay on target and create something Good. To start with, here's a quick flashback to bring you all up to speed.
A Brief Retrospective
As you may already know, we're creating a new publishing asset specifically focused on providing great content for web developers. We don't yet know exactly what this thing will look like, or exactly how it will work, but we know we want to create something that is useful and different. For my part, I'm seriously excited at the prospect of building a genuinely digital publishing system (as opposed to what most publishing is these days, which is print-style publishing which just happens to be on the web). The main challenge at this point is working out our build-measure-assess loop to speed up our experimental turn-around, and that'll get better as we run more trials. Of course, there are a few things we've been pondering at this early conceptual stage:
- Do we publish about heterogeneous technology stacks from day 1, or do we start with ASP.NET (which we're familiar with) & branch out later? There are challenges with either approach.
- What publishing "modes" are already being well-handled? For example, the likes of Pluralsight, TekPub, and Treehouse have pretty much nailed video training (debate about price, if you like), and unless we think we can do it faster / better / cheaper (unlikely, for the record), we should leave them to it.
- Where should we base whatever we create? Should we create a completely new asset under a new name, graft something onto Simple-Talk (like the labs), or just build something directly into Simple-Talk? It sounds trivial, but it does have at least some impact on infrastructure and how we manage the different types of content we (will) have.
- Are there any obvious problems or niches that we think we could address really well, or should we just throw ideas out and see what readers respond to?
- What kind of users do we want to provide for? This actually deserves a little bit of unpacking…
Why are you here?
We currently divide readers into (broadly) these categories:
Category 1: I know nothing about X, and I'd like to learn about it.
Category 2: I know something about X, but I'd like to learn how to do something specific with it.
Category 3: Ah man, I have a problem with X, and I need to fix it now.
Now that I think about it, I might also include a 4th class of reader:
Category 4: I'm looking for something interesting to engage my brain.
These are clearly task-based categorizations, and depending on which task you're performing when you arrive here, you're going to need different types of content, or will have specific discovery needs.
One of the questions that's at the back of my mind whenever I consider a new idea is "How many of the categories will this satisfy?" As an example, typical video training is very well suited to categories 1, 2, and 4. StackOverflow is very well suited to category 3, and serves as a sign-posting system to the rest. Clearly it's not necessary to satisfy every category's needs to be useful and popular, but being aware of what behavior readers might be exhibiting when they arrive will help us tune our ideas appropriately. < / Flashback > We don't have clean answers to most of these considerations – they're things we're aware of, and each idea we look at is going to be best suited to a different mix of the options I've described. Our first experimental loop will be coming full circle in the next few days, so we should start to see how the different possibilities vary between ideas. Feel free to chime in with questions and suggestions about anything I've just brain-dumped, or at any stage as we go along. If you see anything that intrigues or enrages you, or just have an idea you'd like to share, I'd love to hear from you.

    Read the article

  • Computer science undergraduate project ideas

    - by Mehrdad Afshari
    Hopefully, I'm going to finish my undergraduate studies next semester and I'm thinking about the topic of my final project. And yes, I've read the questions with duplicate title. I'm asking this from a bit different viewpoint, so it's not an exact dupe. I've spent at least half of my life coding stuff in different languages and frameworks so I'm not looking at this project as a way to learn much about coding and preparing for real world apps or such. I've done lots of those already. But since I have to do it to complete my degree, I felt I should spend my time doing something useful instead of throwing the whole thing out. I'm planning to make it an open source project or a hosted Web app (depending on the type) if I can make a high quality thing out of it, so I decided to ask StackOverflow what could make a useful project.
Situation: I've plenty of freedom about the topic. They also require 30-40 pages of text describing the project. I have the following points in mind (the more satisfied, the better):
- Something useful for software development
- Something that benefits the community
- Having academic value is great
- Shouldn't take more than a month of development (I know I'm lazy).
- Shouldn't be related to advanced theoretical stuff (soft computing, fuzzy logic, neural networks, ...). I've been a business-oriented software developer.
- It should be software oriented. While I love hacking microcontrollers and other fun embedded electronic things, I'm not really good at soldering and things like that.
- I'm leaning toward a Web application (think StackOverflow, PasteBin, NerdDinner, things like those).
Technology: It's probably going to be done in .NET (C#, F#) and Windows platform. If I really like the project (cool low level hacking), I might actually slip to C/C++. But really, C# is what I'm efficient at.
Ideas:
- Programming language, parsing and compiler related stuff: Designing a domain specific programming language and compiler; Templating language compiled to C# or IL; Database tools and related code generation stuff
- Web related technologies: ASP.NET MVC View engine doing something cool (don't know what exactly...); Specific-purpose, small, fast ASP.NET-based Web framework
- Applications: Visual Studio plugin to integrate with Bazaar (it's too much work, I think). ASP.NET based, jQuery-powered issue tracker (and possibly, project lifecycle management as a whole - poor man's TFS)
- Others: Something related to GPGPU
Looking forward for great ideas! Unfortunately, I can't help on a currently existing project. I need to start my own to prevent further problems (as it's an undergrad project, nevertheless).

    Read the article

  • Qt vs .NET - plz no n00bs who don't know wtf they're talking about [closed]

    - by Pirate for Profit
    Man, in all these Qt vs. .NET discussions, 90% of these people don't know WTF they're talking about. Trying to get a real comparison chart going before we embark on a major fucking project. And yes I'm drunk, and yes I use cocaine.
Event Handling
In Qt's event handling system you just emit signals when something cool happens and then catch them in slots, for instance emit valueChanged(int percent, bool something); and void MyCatcherObj::valueChanged(int p, bool ok){}, blocking them and disconnecting them when needed, doing it across threads... once you get the hang of it, it just seems a lot more natural and intuitive than the way the .NET event handling is set up (you know, object sender, CustomEventArgs e). And I'm not just talking about syntax, because in the end the .NET delegate crap is the bomb. I'm also talking about more than just reflection (because, yes, .NET obviously has much stronger reflection capabilities). I'm talking about the way the system feels to a human being. Qt wins hands down IMO. Basically, the footprints make more sense and you can visualize the project easier without the clunky event handling system. I wish I could explain it better. The only thing is, I do love some of the ease of C# compared to C++, and .NET's assembly architecture. That is a big bonus for modular projects, which are a PITA to do in C++.
Database Ease of Doing Crap
Also, what about datasets and database manipulations? I think .NET wins here but I'm not sure.
Threading/Concurrency
What do you guys think of the threading? In .NET, all I've ever done is make like a list of master worker threads with locks. I like the QtConcurrent framework: you don't worry about locks or anything, and with the ease of the signal/slot system across threads it's nice to get notified about the progress of things.
Memory Usage
Also, what do you think of the overall memory usage comparison? Is the .NET garbage collector pretty on the ball and quick compared to the instantaneous nature of native memory management? Or does it just let programs leak up a storm and lag the computer, then clean it up when it's about to really lag? However, I am a n00b who doesn't know what I'm talking about, please school me on the subject.
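For anyone trying to line the two models up, here is a minimal sketch of the .NET side of that comparison, mirroring the valueChanged example above. The type and member names (Meter, Catcher, ValueChangedEventArgs) are invented for illustration; this is just the standard event/EventArgs idiom being referred to, not anyone's production code.
    using System;

    // Invented names, for illustration only.
    public class ValueChangedEventArgs : EventArgs
    {
        public int Percent { get; private set; }
        public bool Ok { get; private set; }
        public ValueChangedEventArgs(int percent, bool ok) { Percent = percent; Ok = ok; }
    }

    public class Meter
    {
        // Rough counterpart of declaring a Qt signal.
        public event EventHandler<ValueChangedEventArgs> ValueChanged;

        public void Update(int percent, bool ok)
        {
            var handler = ValueChanged;            // copy to a local for thread safety
            if (handler != null)
                handler(this, new ValueChangedEventArgs(percent, ok));   // the "emit"
        }
    }

    public class Catcher
    {
        // Rough counterpart of a slot.
        public void OnValueChanged(object sender, ValueChangedEventArgs e)
        {
            Console.WriteLine("{0}% ok={1}", e.Percent, e.Ok);
        }
    }

    class Program
    {
        static void Main()
        {
            var meter = new Meter();
            var catcher = new Catcher();
            meter.ValueChanged += catcher.OnValueChanged;   // connect
            meter.Update(42, true);
            meter.ValueChanged -= catcher.OnValueChanged;   // disconnect
        }
    }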

    Read the article

  • Best programming aids for a quadriplegic programmer

    - by Peter Rowell
    Before you jump to conclusions, yes, this is programming related. It covers a situation that comes under the heading of, "There, but for the grace of God, go you or I." This is brand new territory for me so I'm asking for some serious help here. A young man, Honza Ripa, in a nearby town did the classic Dumb Thing two weeks after graduating from High School -- he dove into shallow water in the Russian River and had a C-4/C-5 break, sometimes called a Swimming Pool break. In a matter of seconds he went from an exceptional golfer and wrestler to a quadriplegic. (Read the story ... all of us should have been so lucky as to have a girlfriend like Brianna.) That was 10 months ago and he has regained only tiny amounts of control of his right index finger and a couple of other hand/foot motions, none of them fine-grained. His total control of his computer (currently running Win7, but we can change that as needed) is via voice command. Honza's not dumb. He had a 3.7 GPA with AP math and physics. The Problems: Since all of his input is via voice command, he is concerned that the predominance of special characters in programming will require vast amount of verbose commands. Does anyone know of any well done voice input system specifically designed for programmers? I'm thinking about something that might be modal--e.g. you say "Python input" and it goes into a macro mode for doing class definitions, etc. Given all of the RSI in programmer-land there's got to be something out there. What OS(es) does it run on? I am planning on teaching him Python, which is my preferred language for programming and teaching. Are there any applications / whatevers that are written in Python and would be a particularly good match for engaging him mentally while supporting his disability? One of his expressed interests is in stock investing, but that not might be a good starting point for a brand-new programmer. There are a lot of environments (Flash, JavaScript, etc) that are not particularly friendly to people with accessibility challenges. I vaguely remember (but cannot find) a research project that basically created an overlay system on top of a screen environment and then allowed macro command construction on top of the screen image. If we can get/train this system, we may be able to remove many hurdles to using the net. I am particularly interested in finding open source Python-based robotics and robotic prostheses projects so that he can simultaneously learn advanced programming concepts while learning to solve some of his own immediate problems. I've done a ton of googling on this, but I know there things I'm missing. I'm asking the SO community to step up to the plate here. I know this group has the answers, so let me hear them! Overwhelm me with the opportunities that any of us might have/need to still program after such a life-changing event.

    Read the article

  • Scheme Formatting Help

    - by Logan
    I've been working on a project for school that takes functions from a class file and turns them into objects/classes. The assignment is all about object-oriented programming in Scheme. My problem, however, is that my code doesn't format right: the output it gives me whenever I pass in a file wraps the methods of the class in a list, making it so that the class never really gets declared. I can't for the life of me figure out how to remove the parentheses wrapping the method list. I would really appreciate any help. Below is the code and the class file.
;;;; PART1 --- A super-easy set of classes. Just models points and lines. Tests all of the
;; basics of class behavior without touching on anything particularly complex.
(class pointInstance (parent:)
  (constructor_args:)
  (ivars: (myx 1) (myy 2))
  (methods: (getx () myx)
            (gety () myy)
            (setx (x) (set! myx x))
            (show () (begin (display "[") (display myx) (display ",") (display myy) (display "]")))
  ))
(require (lib "trace.ss"))
;; Continue reading until you hit the end of the file, all the while
;; building a list with the contents
(define load-file
  (lambda (port)
    (let ((rec (read port)))
      (if (eof-object? rec)
          '()
          (cons rec (load-file port))))))
;; Open a port based on a file name using open-input-file
(define (load fname)
  (let ((fport (open-input-file fname)))
    (load-file fport)))
;(define lis (load "C:\\Users\\Logan\\Desktop\\simpletest.txt"))
;(define lis (load "C:\\Users\\Logan\\Desktop\\complextest.txt"))
(define lis (load "C:\\Users\\Logan\\Desktop\\pointinstance.txt"))
;(display (cdaddr (cdddar lis)))
(define makeMethodList
  (lambda (listToMake retList)
    ;(display listToMake)
    (cond [(null? listToMake) retList
           ;(display "The list passed in to parse was null")
           ]
          [else (makeMethodList (cdr listToMake) (append retList (list (getMethodLine listToMake))))
           ]
          )
    ))
;(trace makeMethodList)
;this works provided you just pass in the function line
(define getMethodLine
  (lambda (functionList)
    `((eq? (car msg) ,(caar functionList)) ,(caddar functionList))))
(define load-classes
  (lambda paramList
    (cond [(null? paramList) (display "Your parameters are null, man.")]
          [(null? (car paramList))(display "Done creating class definitions.")]
          [(not (null? (car paramList)))
           (begin
             (let* ((className (cadaar paramList))
                    (classInstanceVars (cdaddr (cddaar paramList)))
                    (classMethodList (cdr (cadddr (cddaar paramList))))
                    (desiredMethodList (makeMethodList classMethodList '()))
                    )
               ;(display "Classname: ")
               ;(display className)
               ;(newline)(newline)
               ;(display "Class Instance Vars: ")
               ;(display classInstanceVars)
               ;(newline)(newline)
               ;(display "Class Method List: ")
               ;(display classMethodList)
               ;(newline)
               ;(display "Desired Method List: ")
               ;(display desiredMethodList))
               ;(newline)(newline)
               ;----------------------------------------------------
               ;do not delete the below code!`
               `(define ,className
                  (let ,classInstanceVars
                    (lambda msg
                      ;return the function list here
                      (cond ,(makeMethodList classMethodList '())))
                    ))
               ;---------------------------------------------------
               ))]
          )
    ))
(load-classes lis)
;(load-classes lis)
;(load-classes-helper lis)
;(load-classes "simpletest.txt")
;(load-classes "complextest.txt")
;method list
;(display (cdr (cadddr (cddaar <class>))))

    Read the article

  • Classic ASP vs. ASP.NET encryption options

    - by harrije
    I'm working on a web site where the new pages are ASP.NET and the legacy pages are Classic ASP. Being new to development in the Windows environment, I've been studying the latest technology, i.e. .NET, and I become like a deer in headlights whenever legacy issues come up regarding COM objects. Security on the website is an abomination, but I've easily encrypted the connectionStrings in the web.config file per http://www.4guysfromrolla.com/articles/021506-1.aspx, based on DPAPI machine mode. I understand this approach is not the most secure, but it's better than nothing, which is what it was for the ASP.NET pages. Now, I question how to do similar encryption for the connection strings used by the Classic ASP pages. A complicating factor is that the web site is hosted where I do not have admin permissions or even command line access, just FTP. Moreover, I want to avoid managing the key. My research has found:
- DPAPI with COM interop. Seems like this should already be available, but the only thing I could find discussing this is CryptoUtility (see http://msdn.microsoft.com/en-us/magazine/cc163884.aspx), which is not installed on the hosting server.
- There are plenty of other third-party COM objects, e.g. Crypto from Dalun Software http://www.dalun.com, but these aren't on the hosted server either, and they look to me to require you to do some kind of key management.
- There is CAPICOM on the hosted server, but M$ has deprecated it and many report it is not the easiest to use. It is not clear to me whether I can avoid key management with CAPICOM similar to using DPAPI for ASP.NET. If anyone happens to know, please clue me in.
- I could write a web service in ASP.NET and have the Classic ASP pages use it to get the decrypted connection strings and then store those in an application variable. I would not need to use SSL since I could use localhost and nothing would be sent over the internet. In the simplest form I could implement what someone termed a poor man's version based on a simple XML stream; however, I really was looking to avoid any development since I find it hard to believe there is not a simple solution for Classic ASP like there is for ASP.NET.
Maybe I'm missing some options... Recommendations are requested...
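For context, the ASP.NET side of the 4guysfromrolla approach can also be done in code rather than with aspnet_regiis, which matters when all you have is FTP. The following is a rough sketch only (untested against any particular host), and it does nothing for the Classic ASP pages; it just shows the DPAPI machine-mode protection the question already relies on:
    using System.Configuration;
    using System.Web.Configuration;

    public static class ConfigProtector
    {
        // Encrypts the <connectionStrings> section of web.config in place,
        // using the machine-level DPAPI provider (no key management on our side).
        public static void ProtectConnectionStrings()
        {
            Configuration config = WebConfigurationManager.OpenWebConfiguration("~");
            ConfigurationSection section = config.GetSection("connectionStrings");

            if (section != null && !section.SectionInformation.IsProtected)
            {
                section.SectionInformation.ProtectSection("DataProtectionConfigurationProvider");
                config.Save(ConfigurationSaveMode.Modified);
            }
        }
    }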

    Read the article

  • Using commit monitors as a form of code review

    - by Jeff Dege
    I'm working in a small company - four developers, working on a variety of projects. We've been looking at what we can do as cost-effective methods of process improvement, and an idea came up. Given what we do, we often have single developers working on parts of a system, independently of the other developers. This can have a number of negative effects:
- A developer might not be fully aware of the context in which a change is being implemented, and make the change in a way that will meet the current customer's needs, but will break functionality that other customers depend on.
- A developer might make a change that breaks the current architectural design, introducing a dependency that will cause problems in future development.
- Other developers might not be aware of how the system has changed, in areas that they have not worked on.
We've talked about doing code reviews, as a way of dealing with these issues. But we've not had much success when we tried. It takes a lot of time to prepare a change for a code review, and it takes everybody out of production while the review is being performed. And the benefits of any review we've tried have been minimal. We're using Subversion (with TortoiseSVN) as our VCS. I've been looking at the Subversion CommitMonitor tool, and wondering whether it might work as a sort of poor-man's code review. It lists every commit made on the repository, allowing someone to see the changes that have been made, the log messages made for that change, the files that were included in the change, and the specific lines in each file that were changed. Rather than scheduling a meeting, trying to get everybody together to review every change, we could just have every developer review every other developer's commits, at whatever time was convenient. This would keep every developer abreast of what changes were being made elsewhere in the system, and would have every change reviewed for customer conflicts and design consistency, at a fairly low cost. If someone saw a problem with the code that was being checked in, he could discuss it with the developer who did the commit, or more likely, schedule a meeting to discuss how the new feature could be implemented in a way that would not impact other users or screw up the architecture. Anyone else doing anything like this, using commit monitors for such a purpose?
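One way to approximate the same idea without installing anything on every desktop is a small script that dumps recent commit messages and changed paths for everyone to skim. A rough sketch follows (the repository URL and starting revision are placeholders, and it assumes the svn command-line client is on the PATH):
    using System;
    using System.Diagnostics;

    class CommitDigest
    {
        static void Main()
        {
            var psi = new ProcessStartInfo
            {
                FileName = "svn",
                // --verbose lists the files touched by each revision
                Arguments = "log --verbose -r 2000:HEAD http://svnserver/repos/project",
                RedirectStandardOutput = true,
                UseShellExecute = false
            };

            using (Process svn = Process.Start(psi))
            {
                string log = svn.StandardOutput.ReadToEnd();
                svn.WaitForExit();
                Console.WriteLine(log);   // or format it and e-mail a daily digest to the team
            }
        }
    }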

    Read the article

  • Executing procedural queries in perl

    - by KalpaG
    I have a query like this:
set @valid_total:=0;
set @invalid_total:=0;
select week as weekno, measured_week,project_id as project, role_category_id as role_category,
valid_count,valid_tickets,
(@valid_total := @valid_total + valid_count) as valid_total,
invalid_count,invalid_tickets,
(@invalid_total := @invalid_total + invalid_count) as invalid_total
from metric_fault_bug_project
where measured_week = yearweek(curdate()) and role_category_id = 1 and project_id = 11;
It executes fine in Heidi (a MySQL client), but when it comes to Perl, it gives me this error:
DBD::mysql::st execute failed: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ':=0; set :=0; select week as weekno, measured_week,project_id as project' at line 1 at D:\Mx\scripts\test.pl line 35.
Can't execute SQL statement: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ':=0; set :=0; select week as weekno, measured_week,project_id as project' at line 1
The problem seems to be in the set @valid_total := 0; line. I am fairly new to Perl. Can anyone help? This is the complete Perl code:
#!/usr/bin/perl
#use lib '/x01/home/kalpag/libs';
use DBI;
use strict;
use warnings;
my $sid = 'issues';
my $user = 'root';
my $passwd = 'kalpa';
my $connection = "DBI:mysql:database=$sid;host=localhost";
my $dbhh = DBI->connect( $connection, $user, $passwd) || die "Database connection not made: $DBI::errstr";
my $sql_query = 'set @valid_total:=0;
set @invalid_total:=0;
select week as weekno, measured_week,project_id, role_category_id as role_category,
valid_count,valid_tickets,
(@valid_total := @valid_total + valid_count) as valid_total,
invalid_count,invalid_tickets,
(@invalid_total := @invalid_total + invalid_count) as invalid_total
from metric_fault_bug_project
where measured_week = yearweek(curdate()) and role_category_id = 1 and project_id = 11';
my $sth = $dbhh->prepare($sql_query) or die "Can't prepare SQL statement: $DBI::errstr\n";
$sth->execute() or die "Can't execute SQL statement: $DBI::errstr\n";
while ( my @memory = $sth->fetchrow() ) {
    print "@memory \n";
}

    Read the article

  • Trouble cross-referencing two XML child nodes in AS3

    - by Dwayne
    I am building a mini language translator out of XML nodes and ActionScript 3. I have my XML ready; it contains a series of nodes with two children in them. Here is a sample:
<translations>
  <entry>
    <english>man</english>
    <cockney>geeza</cockney>
  </entry>
  <entry>
    <english>woman</english>
    <cockney>lily</cockney>
  </entry>
</translations>
The AS3 part consists of one input box named "textfield_txt" where the English word will be typed in, an output text field for the translation called "cockney_txt", and finally one button to execute the translation called "generate_mc". The idea is to have ActionScript look through the XML for the key English word after the user types it in the "textfield", cross-reference the children, then return the Cockney translation as a value. The trouble is, when I test it I get no response or error messages - it's completely silent. I'm not sure what I'm doing wrong. At present, I have set up a conditional statement to tell me whether the function works or not. The result is, no it's not! Here's the code below. I hope someone can help. Cheers!
generate_mc.buttonMode=true;
var English:String;
var myXML:XML;
var myLoader:URLLoader = new URLLoader();
myLoader.load(new URLRequest("Language.xml"));
myLoader.addEventListener(Event.COMPLETE, processXML);
function processXML(e:Event):void {
    myXML = new XML(e.target.data);
}
generate_mc.addEventListener(MouseEvent.CLICK, onClick);
function onClick(event:MouseEvent) {
    English = textfield.text;
    cockney_txt.text = myXML.translations.entry.cockney;
    if(textfield.text.toLowerCase() == myXML.translations.entry.english.toLowerCase){
        //return myXML.translations.entry.cockney;
        trace("success");
    }else{
        trace("try again!"); // ***I get this as a result
    }
}

    Read the article

< Previous Page | 67 68 69 70 71 72 73 74 75 76 77 78  | Next Page >