Search Results


  • Executable Resumes

    - by Liam McLennan
    Over the past twelve months I have been thinking a lot about executable specifications. Long considered the holy grail of agile software development, executable specifications mean expressing a program’s functionality in a way that is both readable by the customer and computer-verifiable in an automatic, repeatable way. With the current generation of BDD and ATDD tools, executable specifications seem finally within the reach of a significant percentage of the development community. Lately, and partly as a result of my craftsmanship tour, I have decided that soon I am going to have to get a job (gasp!). As Dave Hoover describes in Apprenticeship Patterns, “you … have mentors and kindred spirits that you meet with periodically, [but] when it comes to developing software, you work alone.” The time may have come where the only way for me to feel satisfied and enriched by my work is to seek out a work environment where I can work with people smarter and more knowledgeable than myself.

    Having been on both sides of the interview desk many times, I know how difficult and unreliable the process can be. Therefore, I am proposing the idea of executable resumes. As a journeyman programmer looking for a fruitful work environment, I plan to write an application that demonstrates my understanding of the state of the art. Potential employers can download, view and execute my executable resume and judge whether my aesthetic sensibility matches their own. The concept of the executable resume is based upon the following assertion: a line of code answers a thousand interview questions. Asking people about their experiences and skills is not a direct way of assessing their value to your organisation. Often it simply assesses their ability to mislead an interviewer. An executable resume demonstrates: the highest quality code that the person is able to produce; that the person is sufficiently motivated to produce something of value in their own time; and that the person loves their craft.

    The idea of publishing a program to demonstrate a developer’s skills comes from Rob Conery, who suggested that each developer should build their own blog engine, since it is the public representation of their level of mastery. Rob said: “Luke had to build his own lightsaber – geeks should have to build their own blogs. And that should be their resume.” In honour of Rob’s inspiration I plan to build a blog engine as my executable resume. While it is true that the world does not need another blog engine, it is as good a project as any, it is a well understood domain, and I have not found an existing blog engine that I like. Executable resumes fit well with the software craftsmanship metaphor. It is not difficult to imagine that under the guild system, master craftsmen may have accepted journeymen based on the quality of the work they had produced in the past. We now understand that when it comes to the functionality of an application, code is the final arbiter. Why not apply the same rule to hiring?

    Read the article

  • SQL Developer Debugging, Watches, Smart Data, & Data

    - by thatjeffsmith
    After presenting the SQL Developer PL/SQL debugger for about an hour yesterday at KScope12 in San Antonio, my boss came up and asked, “Now, would you really want to know what the Smart Data panel does?” Apparently I had ‘made up’ my own story about that panel’s intent based on my experience with it. Not good Jeff, not good. It was a very small point of my presentation, but I probably should have read the docs. The Smart Data tab displays information about variables, using your Debugger: Smart Data preferences. You can also specify these preferences by right-clicking in the Smart Data window and selecting Preferences. [Screenshot: Debugger Smart Data preferences, which control the number of variables to display.]

    The Smart Data panel auto-inspects the last X accessed variables. So if you have a program with 26 variables, instead of showing you all 26, it will just show you the last two variables that were referenced in your program. If you click on the ‘Data’ debug panel, you’ll see EVERYTHING. And if you only want to see a very specific set of values, then you should use Watches.

    The Smart Data Panel: as I step through the code, the variables being tracked change as they are referenced; only the most recent ones display. This is controlled by the ‘Maximum Locations to Remember’ preference. [Screenshot: stepping through the code, showing the latest variables accessed.]

    The Data Panel: all variables are displayed, all the time. This might be information overload on large PL/SQL programs where you have many dozens or even hundreds of variables to track.

    Watches: watches are added manually and show only what you ask for – data on demand.

    Remember, you can interact with your data. If you want to do more than just watch, you can mouse-right on a data element and change the value of the variable as the program is running – handy for testing your ‘What if?’ scenarios. This is one of the primary benefits of debugging over using DBMS_OUTPUT to track what’s happening in your program.

    Read the article

  • Too Few Women in IT!

    - by Yolande
    Last year, only 1% of attendees at Devoxx were women. This year, Devoxx addressed the issue in a panel entitled "Why We Should Target Women." On the panel were Kim Ross, Régina ten Bruggencate, Trisha Gee, Antonio Goncalves and Claude Falguiére. The moderator was Martijn Verburg. The discussion focused on how to attract women to programming and how to get current women programmers to be more active in the community. The panelists agreed that the IT field should not just attract more women but also men of different ethnic backgrounds. The lack of women in programming is in part a cultural issue that differs from region to region. In developed countries, very few women work as programmers, whereas in Brazil and India a lot of women pursue careers in IT. Women in developed countries perceive the field as isolating, and very few young women graduate in computer science. This perception of isolation was based in reality decades ago, but that is no longer the case today. Main ideas discussed by the panel:
    - Parents should encourage their daughters to play with Lego and learn programming.
    - More organizations should target girls in high school and young women in university to expose them to programming. The Duchess organization is planning to be more involved with events and mentoring for young girls.
    - Women tend to be more self-critical about their skills and are intimidated by high skill requirements in job advertisements. Companies should change job advertisements to get more women to interviews.
    - Panelists don't recommend affirmative action, because women then feel favored and lose credibility. They want to be judged on their skills.
    - Panelists recommend acting the same way when dealing with either female or male co-workers and managers.
    - Women need mentors (men or women) to learn to become speakers at conferences and to promote themselves better.
    - Men should be sensitive to the fact that a woman at work is often alone in responding to teasing from men. The balance of power at work is different from a social setting.
    - Men also experience discrimination on the job. It is more difficult for men to take time off when their children are sick, for example. Equal valuing of parental obligations could result in equal pay for women.
    See also: Trisha Gee's blog - http://mechanitis.blogspot.com/ and the Duchess Organization - http://www.jduchess.org/

    Read the article

  • WebCenter Spaces 11g PS2 Template Customization

    - by javier.ductor(at)oracle.com
    Recently, we have been involved in a WebCenter Spaces customization project. The customer sent us a prototype website in HTML, and we had to give Spaces the same look & feel as the prototype. [Prototype screenshot.] First of all, we downloaded a Virtual Machine with WebCenter Spaces 11g PS2, the same version as the customer's. The next step was to download the ExtendWebCenterSpaces application. This is a WebCenter application that customizes several elements of WebCenter Spaces: templates, skins, landing page, etc. JDeveloper configuration is needed; we followed the steps described in the Extended Guide, and this blog was helpful too. After that, we deployed the application using the WebLogic console, created a new group space and assigned the ExtendedWebCenterSpaces template (portalCentricSiteTemplate) to it. The result was this: [screenshot.] As you may see, there was a big difference from the prototype, which meant a lot of work to do on our side. So we needed to create a new Spaces template with its skin. Using JDeveloper, we created a new template based on the default template. We removed all menus we did not need and inserted 'include' tags for the header, breadcrumb and footers. Each of these elements was defined in an isolated jspx file. In the beginning, we faced a problem: we had to add code from the prototype (in plain HTML) to jspx templates (JSF language). We managed to work around this issue using 'verbatim' tags with 'CDATA' surrounding the HTML code in the header, breadcrumb and footers, as sketched below. Once the template was modified, we added css styles to the default skin, so we had some styles from portalCentricSiteTemplate plus styles from the customer's prototype. Then we re-deployed the application, assigned this template to a new group space and checked the result. After testing, we usually found issues, so we had to make some modifications in the application; then it was necessary to re-deploy the application and restart the Spaces server. Due to this, the testing process takes quite a bit of time. In addition to the template and skin customization, we also customized the Landing Page using this application, and the Members task flow using another application: SpacesTaskflowCustomizationApplication. I will talk about this task flow customization in a future entry. After some issues and workarounds, we eventually managed to get this look & feel in Spaces: [screenshot.] P.S. In this customization I was working with Francisco Vega and Jose Antonio Labrada, consultants from the Oracle Malaga International Consulting Centre.
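    A minimal sketch of that verbatim/CDATA workaround, assuming a hypothetical header fragment (the file name and markup are illustrative, not the actual project files):

    ```xml
    <!-- header.jspx: raw HTML from the prototype is wrapped in
         f:verbatim + CDATA so the JSF parser passes it through untouched -->
    <jsp:root xmlns:jsp="http://java.sun.com/JSP/Page"
              xmlns:f="http://java.sun.com/jsf/core"
              version="2.1">
      <f:verbatim>
        <![CDATA[
        <div id="header">
          <img src="/images/logo.png" alt="Customer logo"/>
          <ul class="top-nav"><li>Home</li><li>Products</li><li>Contact</li></ul>
        </div>
        ]]>
      </f:verbatim>
    </jsp:root>
    ```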

    Read the article

  • The Developer's Conference Florianópolis, Brazil

    - by Tori Wieldt
    by guest blogger Yara Senger. With over 2900 developers in person and another 2000 online, The Developer's Conference (TDC) in Florianópolis, Brazil, reminds us that Java is BIG in Brazil. The conference included 20 different tracks, and Java was the most popular; Java was also a big part of the talks in the IoT, Cloud and BigData tracks. Here's my overview (in Brazilian Portuguese): [video]. Several JUGs were involved in TDC Florianópolis, serving as track leads, speakers and all-around heroes, including SouJava, SouJava Campinas, GUJava Santa Catarina, JUG Vale, JUG Maringá, Java Bahia, GOJava (Goiânia), JUG Rio do Sul and RS JUG (Rio Grande do Sul), and I thank them for their support and commitment. It is a vibrant and fun community! We saw that the IoT space is maturing rapidly; there are already some embedded-related projects in the region. Java Evangelist Bruno Borges and Marco Antonio Maciel gave a very popular talk, "Java: Tweet for Beer!" They demonstrated how to make a beer tap controlled by Java and connected to the Internet, using a JavaFX visual application with Java SE 8 running on a Raspberry Pi. Of course, they had to test the application quite thoroughly. We Brazilians are training the next generation of Java developers: TDC4Kids was a big success. We took the kids on a tour of all the booths, and almost everybody talked about Java - Java in government management (Betha), Java on the 2048 game (Oracle), Java on the popcorn machine, Java training (Globalcode & V.Office) and, of course, Java & Minecraft! OTN's Pablo Ciccarello was there to support the community. He did several video interviews with JUG leaders and speakers (mine included); you can watch more videos on his TDC Florianópolis playlist. Thank you, Oracle and OTN, for all your support. We interacted with thousands of Java developers at The Developer's Conference Florianópolis. If you want to join us, we are planning two more conferences this year: The Developer's Conference São Paulo in July, and The Developer's Conference Porto Alegre in October.

    Read the article

  • An invitation to join a JDeveloper and ADF productivity clinic (and more!) at KScope

    - by Chris Muir
    Would you like a chance to influence Oracle's decisions on tool usability and productivity? If you're attending ODTUG's Kaleidoscope conference this year in San Antonio, Oracle would like to invite you to participate in our Usability Activity Research and, separately, our JDeveloper and ADF Productivity Clinics with our experienced user experience teams. The teams are keen to hear what you have to say about your experiences with our tools in general and specifically JDeveloper and ADF. The details of each event are described below. Invitation to Usability Activity - Sunday June 24th to Wednesday June 27th: Oracle is constantly working on new tools and new features for developers, and invites YOU to become a key part of the process! As a special addition to Kscope 12, Oracle will be conducting onsite usability research in the Alyssum room, from Sunday June 24 to Wednesday June 27. Usability activities are scheduled ahead of time for participants' convenience. If you would like to take part, please fill out this form to let us know the session(s) you would like to attend and your development experience. You will be emailed your scheduled session before the start of the conference. JDeveloper and ADF Productivity Clinic - Thursday June 28th: Are you concerned that Java, Oracle ADF or JDeveloper is difficult? Is JDeveloper making you jump through hoops? Do you hate a particular dialog or feature of JDeveloper? Well, come and get things off your chest! Oracle is hosting a product management and user experience clinic where we want to hear about your issues and concerns. What's difficult to use? What doesn't work the way you want, and how would you want it to work? What isn't behaving like your current favorite tool? If we can't help you on the spot, we'll take your feedback and use it to improve the product experience. A great opportunity to get answers, or get improvements. Drop by the Alyssum room anytime from 8:30 to 10:30 on Thursday, June 28. We look forward to seeing you at KScope soon!

    Read the article

  • What are the recommended BEST CASE hardware requirements for TFS 2010

    - by Doug
    Hi guys, I have installed TFS 2010 in a two-server setup with an app-tier server and a SQL Server, and am not 100% happy with the performance. Both are running in VMs on SAN disks and have each been given the following virtual hardware: Windows 2008 R2, 1 CPU @ 2.8GHz, 2GB RAM. What should I upgrade? Neither machine is hammered, but both do go up to 80% when people are doing things on them. Should I add another CPU to each? Usually this is not recommended in a VMware setup, but I don't know whether TFS 2010 takes advantage of an extra core. Thank you in advance :-)

    Read the article

  • Javascript help - compare two dates in (dd/MMM/yyyy) format

    - by Kettenbach
    Hi all, I am trying to compare two textbox values on an ASPX form. Both values represent dates, formatted like "dd/MMM/yyyy". How can I compare them in JavaScript to see if they are not equal, which one is greater, etc.? Does JavaScript have a date constructor for strings like this? Any tips would be appreciated. Thanks for the help. Cheers, ~ck in San Diego
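    For what it's worth: JavaScript's built-in Date parsing is not reliable for "dd/MMM/yyyy" across browsers, so a small manual parser is the safer route. A minimal sketch (the textbox IDs are hypothetical):

    ```javascript
    // Parse a "dd/MMM/yyyy" string such as "05/Mar/2010" into a Date.
    function parseDdMmmYyyy(s) {
      var months = { Jan: 0, Feb: 1, Mar: 2, Apr: 3, May: 4,  Jun: 5,
                     Jul: 6, Aug: 7, Sep: 8, Oct: 9, Nov: 10, Dec: 11 };
      var parts = s.split('/');                 // ["05", "Mar", "2010"]
      return new Date(parseInt(parts[2], 10),   // year
                      months[parts[1]],         // zero-based month
                      parseInt(parts[0], 10));  // day
    }

    var d1 = parseDdMmmYyyy(document.getElementById('txtStart').value);
    var d2 = parseDdMmmYyyy(document.getElementById('txtEnd').value);

    // Compare via the underlying millisecond timestamps.
    if (d1.getTime() === d2.getTime()) {
      alert('The dates are equal');
    } else if (d1 < d2) {
      alert('The first date is earlier');
    } else {
      alert('The first date is later');
    }
    ```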

    Read the article

  • jQuery clone chained selects

    - by Volkaria
    I started from http://jsfiddle.net/FJFFJ/1/ (from "Chain dynamically created dropdowns with jQuery"). It's really good, but now I need to change one thing: clone the last group of selects. I.e.:
    +- Argentina | San Juan | Rawson
       Chile | Santiago | Chiñihue
    Then, if I click "+", it should clone "Chile | Santiago | Chiñihue" instead of the first group. Can someone help, please? Thanks in advance!
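    A hedged sketch of one way to do it, assuming hypothetical markup in which each row of three selects is wrapped in a div with class "selectGroup" and the "+" button has id "add" (the jsfiddle's actual markup will differ):

    ```javascript
    $('#add').click(function () {
      var $last = $('.selectGroup').last(); // the LAST group, not the first
      var $copy = $last.clone(true);        // true = clone bound event handlers too
      // .clone() does not carry over dynamically selected options,
      // so copy each select's current value explicitly:
      $copy.find('select').each(function (i) {
        $(this).val($last.find('select').eq(i).val());
      });
      $last.after($copy);
    });
    ```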

    Read the article

  • iphone google maps - direction using latitude and longitude

    - by satyam
    I'm working on an iPhone app. I know my current location (latitude and longitude) and the destination's (latitude and longitude). How can I use Google Maps to find directions? The URL for Google Maps looks something like "http://maps.google.com/maps?daddr=San+Francisco,+CA&saddr=cupertino". In the URL above, I want to use latitude and longitude instead of the source and destination place names. How?
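    The maps.google.com endpoint accepts raw "lat,lng" pairs for both saddr and daddr, so you can build the URL directly. A small sketch with made-up coordinates:

    ```javascript
    // Example coordinates only -- substitute the values from your location APIs.
    var srcLat = 37.3317, srcLng = -122.0307;  // current position
    var dstLat = 37.7793, dstLng = -122.4192;  // destination

    var url = 'http://maps.google.com/maps' +
              '?saddr=' + srcLat + ',' + srcLng +
              '&daddr=' + dstLat + ',' + dstLng;

    // In an iPhone app you would hand this URL to the system
    // (e.g. via UIApplication's openURL:) to launch the Maps app.
    ```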

    Read the article

  • mysql innodb:innodb_flush_method

    - by Daniel
    The following link, http://dev.mysql.com/doc/refman/5.1/en/innodb-parameters.html#sysvar_innodb_flush_method, says: "Different values of this variable can have a marked effect on InnoDB performance. For example, on some systems where InnoDB data and log files are located on a SAN, it has been found that setting innodb_flush_method to O_DIRECT can degrade performance of simple SELECT statements by a factor of three." Why could O_DIRECT slow down SELECT statements?
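    The usual explanation: O_DIRECT bypasses the operating system's page cache, so reads that were previously served out of OS memory now go all the way to the SAN unless InnoDB's own buffer pool can absorb them. A hedged my.cnf sketch (values are illustrative; benchmark before committing):

    ```ini
    [mysqld]
    # O_DIRECT avoids double-buffering (OS page cache + InnoDB buffer pool),
    # but gives up the OS page cache as a second-chance read cache.
    innodb_flush_method = O_DIRECT
    # Compensate by sizing InnoDB's own cache generously:
    innodb_buffer_pool_size = 2G
    ```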

    Read the article

  • IIS7 web farm - local or shared content?

    - by rbeier
    We're setting up an IIS7 web farm with two servers. Should each server have its own local copy of the content, or should they pull content directly from a UNC share? What are the pros and cons of each approach? We currently have a single live server WEB1, with content stored locally on a separate partition. A job periodically syncs WEB1 to a standby server WEB2, using robocopy for content and msdeploy for config. If WEB1 goes down, Nagios notifies us, and we manually run a script to move the IP addresses to WEB2's network interface. Both servers are actually VMs running on separate VMWare ESX 4 hosts. The servers are domain-joined. We have around 50-60 live sites on WEB1 - mostly ASP.NET, with a few that are just static HTML. Most are low-traffic "microsites". A few have moderate traffic, but none are massive. We'd like to change this so both WEB1 and WEB2 are actively serving content. This is mainly for reliability - if WEB1 goes down, we don't want to have to manually intervene to fail things over. Spreading the load is also nice, but the load is not high enough right now for us to need this. We're planning to configure our firewall to balance traffic across the two servers. It will detect when a server goes down and will send all the traffic to the remaining live server. We're planning to use sticky sessions for now... eventually we may move to SQL Server session state and stateless load balancing. But we need a way for the servers to share content. We were originally planning to move all the content to a UNC share. Our storage provider says they can set up a highly available SMB share for us. So if we go the UNC route, the storage shouldn't be a single point of failure. But we're wondering about the downsides to this approach: We'll need to change the physical paths for each site and virtual directory. There are also some projects that have absolute paths in their web.config files - we'll have to update those as well. We'll need to create a domain user for the web servers to access the share, and grant that user appropriate permissions. I haven't looked into this yet - I'm not sure if the application pool identity needs to be changed to this user, or if there's another way to tell IIS to use this account when connecting to the share. Sites will no longer be able to access their content if there's ever an Active Directory problem. In general, it just seems a lot more complicated, with more moving parts that could break. Our storage provider would create a volume for us on their redundant SAN. If I understand correctly, this SAN volume would be mounted on a VM running in their redundant VMWare environment; this VM would then expose the SMB share to our web servers. On the other hand, a benefit of the shared content approach is that we'd only need to deploy code to one place, and there would never be a temporary inconsistency between multiple copies of the content. This thread is pretty interesting, though some of these people are working at a much larger scale. I've just been discussing content so far, but we also need to think about configuration. I don't know if we can just use DFS replication for the applicationHost.config and other files, or if it's best to use the shared configuration feature with the config on a UNC share. What do you think? Thanks for your help, Richard
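    For reference, a hedged sketch of the kind of sync job described above, using the two tools the post already mentions; server names, paths and retry settings are hypothetical:

    ```bat
    REM sync-web.cmd -- mirror content from WEB1 to WEB2, then push IIS config.
    REM /MIR mirrors the tree (including deletions); /R and /W bound retries.
    robocopy D:\WebContent \\WEB2\D$\WebContent /MIR /R:2 /W:5 /LOG:C:\Logs\websync.log

    REM Web Deploy (msdeploy) must be installed on both servers.
    msdeploy -verb:sync -source:webServer -dest:webServer,computerName=WEB2
    ```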

    Read the article

  • Vagrant ssh fails with VirtualBox

    - by lukewm
    vagrant up fails when it gets to the ssh part:

    myterminal$ vagrant up
    [default] VM already created. Booting if its not already running...
    [default] Running any VM customizations...
    [default] Clearing any previously set forwarded ports...
    [default] Forwarding ports...
    [default] -- ssh: 22 => 2222 (adapter 1)
    [default] -- db2: 30003 => 30003 (adapter 1)
    [default] Cleaning previously set shared folders...
    [default] Creating shared folders metadata...
    [default] Booting VM...
    [default] Waiting for VM to boot. This can take a few minutes.
    [default] Failed to connect to VM!
    Failed to connect to VM via SSH. Please verify the VM successfully booted
    by looking at the VirtualBox GUI.

    Then when I subsequently try to connect using vagrant ssh or vagrant reload or similar, I get this:

    myterminal$ vagrant reload
    [default] Attempting graceful shutdown of linux...
    SSH connection was refused! This usually happens if the VM failed to
    boot properly. Some steps to try to fix this: First, try reloading your
    VM with `vagrant reload`, since a simple restart sometimes fixes things.
    If that doesn't work, destroy your VM and recreate it with a `vagrant
    destroy` followed by a `vagrant up`. If that doesn't work, contact a
    Vagrant maintainer (support channels listed on the website) for more
    assistance.

    Please help! I'm really stumped. Kind regards, Luke
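    One hedged debugging step, since the error message itself points at the VirtualBox GUI: boot the VM with the console window visible so you can see where startup hangs (fsck prompt, kernel panic, missing Guest Additions, etc.). Shown in current Vagrantfile syntax (older 1.x-era files use Vagrant::Config.run instead), with a made-up box name:

    ```ruby
    Vagrant.configure("2") do |config|
      config.vm.box = "mybox"                    # hypothetical box name
      config.vm.provider "virtualbox" do |vb|
        vb.gui = true                            # show the VM console while booting
      end
    end
    ```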

    Read the article

  • Exchange Server 2007 Setup

    - by AlamedaDad
    Hi, I'm working on an upgrade to Exchange 2007 and I wanted to get some advice on hardware choices. We currently have an Exchange 2003 STD server with 400 users split between 6 AD sites, housed on a single server. We need to move to a redundant, fault-tolerant system to support our users. I'm planning on installing 2 Dell 1950 servers with W2k8-std to act as CAS and Hub servers, with NLB to allow abstraction of the actual server name to the users. There won't be an edge system, since we have a Barracuda box already that will handle in/out spam/virus filtering. On the back end I'm planning on 2 mailbox servers, which will be Dell 2950s with 16GB RAM, 2 dual-core or quad-core CPUs and 6 300GB SAS drives in some RAID config. These systems will be clustered using W2k8 Ent clustering and running CCR in Exchange. My questions are as follows:
    1. Is 16GB enough RAM for serving that many mailboxes along with the Windows clustering and CCR?
    2. I'm trying to figure out disk layouts and I'm unsure whether to use all local disk, or some local and some SAN via an OpenFiler iSCSI server. The SAN would be a Dell 2850 with 6 300GB SCSI drives and a PERC controller to slice as I want, with 8GB RAM.
       Option 1: 2 drives, RAID 1 - OS; 2 drives, RAID 1 - logs; 2 drives, RAID 1 - mail stores.
       Option 2: 2 drives, RAID 1 - OS and logs; 4 drives, RAID 5 - mail stores and scratch space for eseutil.
       Option 3: 2 drives, RAID 1 - OS; 2 drives, RAID 1 - logs; 2 drives, RAID 0 - scratch space; ~300GB iSCSI volume for mail stores.
       Option 4: 2 drives, RAID 1 - OS; 4 drives, RAID 5 - scratch space; ~300GB iSCSI volume for mail stores; ~300GB iSCSI volume for logs.
    3. I have 2 sockets for CPUs and need to choose between dual and quad cores. The dual cores have faster clocks but less cache, and I'm thinking older architecture. Am I better off with more cores and cache while sacrificing clock speed?
    4. I am planning on adding the new E2K7 cluster to the E2K3 server, then moving each mailbox over all at once, then removing the old server. This seems more complicated than simply getting rid of the 2003 server, adding the 2007 cluster and restoring the mailboxes using PowerControls or exmerge. The migration option lets me do this on my own time, whereas a cutover means it all needs to work at once. If I go with the cutover method, how can I prebuild the servers and add them to the domain right after removing the 2003 server - or can't I? I think the answer is no, and the migration is my only real option if I want to prebuild.
    5. I also need to migrate about 30GB of public folders. Is there anything special about this, other than specifying in the E2K7 install that I want older Outlook clients and PFs set up? I guess I could even keep the E2K3 server to host just the PFs?
    6. Lastly, if I have a mix of Outlook 2000, 2003 and 2007, what do I need to do to make sure they all have access to the GAL and OAB? At time of cutover we'll be at about 90% 2007, but we will have some older stuff around. My plan is to use Outlook Anywhere on laptops that are used outside the physical network. Are there any gotchas involved in that? I'm even thinking about using it for all Outlook clients - does anyone do that? The reason I'm considering it is that our WAN is really VPN tunnels over internet connections, so not a fully meshed, stable WAN.
    Thank you all very much for the assistance in advance, and I look forward to discussion of these points! Regards...Michael

    Read the article

  • How to count the most recent value based on multiple criteria?

    - by Andrew
    I keep a log of phone calls like the following, where the F column is LVM = Left Voice Mail, U = Unsuccessful, S = Successful:

    Row  A  B       C     D           E        F
    1    1  Smith   John  11/21/2012  8:00 AM  LVM
    2    2  Smith   John  11/22/2012  8:15 AM  U
    3    3  Harvey  Luke  11/22/2012  8:30 AM  S
    4    4  Smith   John  11/22/2012  9:00 AM  S
    5    5  Smith   John  11/23/2012  8:00 AM  LVM

    This is a small sample; I actually have over 700 entries. In my line of work, it is important to know how many unsuccessful calls (LVM or U) I have made since the last successful one (S). Since values in the F column can repeat, I need to take into consideration both the B and C columns. Also, since I can make a successful call with a client and then be trying to contact them again, I need to be able to count from the last successful call. My G column is completely open, which is where I would like to put a running total for each client (ideally G5 would = 1, while G4 = 0, G3 = 0, G2 = 2, G1 = 1 - but I want these values calculated automatically so that I do not have to scroll through 700 names).
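    One hedged way to get that running count in G without a macro, assuming the data starts in row 1 exactly as laid out above. G1 seeds the count; G2 is filled down. The LOOKUP(2, 1/(...)) idiom fetches the G value from the most recent earlier row for the same client, and IFERROR covers a client's first appearance:

    ```text
    G1: =IF(F1="S",0,1)
    G2: =IF(F2="S",0,IFERROR(LOOKUP(2,1/(($B$1:B1=B2)*($C$1:C1=C2)),$G$1:G1),0)+1)
    ```

    Fill G2 down through all 700+ rows; each "S" resets that client's count to 0, and every LVM/U row adds 1 to that client's tally.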

    Read the article

  • Just a few questions about Hyper-V virtual machines and clustering

    - by René Kåbis
    I have been using Microsoft’s Hyper-V technology for a little while now, but I am just now dipping my toe into clustering. In particular, I am trying to implement a fault-tolerant SQL DB. This involves setting up two VMs, clustering them via Failover Cluster, and then installing SQL Server in some fashion. I have two physical machines - one high-end and rather beefy “heavy lifter” to contain the majority of the VMs, and another “backup” (a repurposed desktop) to hold the essential “secondary” (or failover) AD-DC, SQL and FS VMs. The main reason why I find the failover cluster at the VM level so attractive is that it presents a single IP and DNS entry to the network as a whole - if one machine (physical or virtual) goes down, you might lose some pings and the connections get reset, but the network applications (Microsoft RMS connection to backend SQL) can still connect to a viable DB without having to mess around with the settings at all. My first question is in terms of SQL Server itself. If I have a cluster between two VMs, does it make more sense to install SQL Server in a Failover Cluster configuration, or should I simply install it in a stand-alone config and mirror the DBs? For example, this post suggests just mirroring the DBs, but do I just mirror standalone DBs on standalone VMs, or can I get the network and failover benefits of clustered VMs while still utilizing (on each clustered VM) standalone DBs that have been mirrored between each other? As well, I have come across a lot of documentation about SQL clustering, but most of it assumes two physical machines to hold not only the actual SQL VMs but also the Quorum and Witness stores. I will not be able to muster more than two physical machines. As such, I will have to be satisfied with a VM cluster that does not exceed two VMs (one for each physical machine). Another issue involves MSDTC - the Distributed Transaction Coordinator. When attempting to install the SQL Failover Cluster (I never completed it for this reason) it threw a hissy fit because MSDTC had not been clustered. Search as I might, I have not yet found a way to do so under Windows Server 2012 R2. I have found plenty of docs for Windows 2008 and 2008 R2, but these instructions don’t align with 2012 R2 (at least, not in a way that allows me to successfully cluster MSDTC). Plus, some of the instructions that I have found for SQL Server Failover Cluster installation suggest that a third “network device” - shared network storage (a SAN) - is required for the DB itself (and other functionality). I do not have this, and won’t be getting this. Most of my storage exists on the “heavy lifter” that was designed for all of the “primary” VMs. If that physical machine goes down, so does the storage. The secondary server does have enough resources for an AD-DC server, an SQL server and a file server, so it will handle the “secondary” failover versions of those VMs (clustered or not). My final question involves file servers. If I cluster file servers between two VMs (one on my “heavy lifter” and another on my “backup”), how do I mirror the data between them? Clustering VMs only provides a single point of access on the network for a resource; it doesn’t actually replicate data between the two - that is left to the services that serve up that data. I am unsure how I can ensure that file server data between two clustered file server VMs can be properly mirrored. Remember, I only have two devices to be used here - my primary machine and a backup secondary.
There is no chance of me obtaining a SAN or any other type of network attached storage. What exists on the machines must act as the storage. Thanks in advance for any suggestions.
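    On the mirroring option: a hedged T-SQL sketch of classic database mirroring between two standalone instances (endpoint name, ports, database and host names are all hypothetical; note that mirroring is deprecated in newer SQL Server releases in favor of Availability Groups):

    ```sql
    -- On EACH instance: create a mirroring endpoint.
    CREATE ENDPOINT Mirroring
        STATE = STARTED
        AS TCP (LISTENER_PORT = 5022)
        FOR DATABASE_MIRRORING (ROLE = PARTNER);

    -- On the mirror server (after restoring the database WITH NORECOVERY):
    ALTER DATABASE RMS_DB SET PARTNER = 'TCP://sql-primary:5022';

    -- On the principal server:
    ALTER DATABASE RMS_DB SET PARTNER = 'TCP://sql-mirror:5022';
    ```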

    Read the article

  • Symantec Protection Suite and System Recovery 2011 Desktop Edition

    - by rihatum
    I am re-posting this as my previous question was treated as if I were "shopping or seeking product recommendations", even though I was NOT - BTW they deleted my comments too, which were not offensive in nature. Anyway, I have re-phrased some parts of my question and I hope SF admins "Do Not Modify / Edit" this one - I will be most grateful for that. I have a lot of respect for the people who visit this site and help others! Just to clarify, and to go by SF rules: I am not seeking someone to design this solution. I am simply seeking real-world examples, experiences, technical expert opinions/suggestions, any tips or tricks they may have, or any problems they may have faced while doing something similar with these products. I am also not asking for capacity planning for storage; we have done some research and I am seeking expert assurance/suggestions. We (our company) are planning to deploy Symantec Endpoint Protection and Symantec Desktop Recovery 2011 Desktop Edition to our 3000-4000 workstations (Windows 7, 32- and 64-bit) with a few hundred on Windows XP 32/64-bit. I have read the implementation guide for SEP and the tech notes for Desktop Recovery 2011. Our team has planned to deploy this as follows: 1 x dedicated SQL 2008 R2 for Symantec Endpoint Protection (instead of using the embedded database); 1 x dedicated SQL 2008 R2 for Symantec Desktop Recovery 2011 (instead of using the embedded database); 1 x dedicated W2K8 R2 box for the SEPM (Symantec Endpoint Protection Manager management app); 1 x dedicated W2K8 R2 box for the Symantec Desktop Recovery 2011 management application. Agent deployment: as per the Symantec documentation for both of the above, an agent can be pushed via the management application (provided no firewalls are blocking the required ports etc. - we have the Windows firewall disabled already). Server hardware: per SQL server, 16GB RAM + SAS disks + dual Xeon, RAID 10 for the SQL DB, or I can always mount a LUN from our existing Hitachi or EMC SAN; SEPM server, 16GB RAM + SAS disks + dual Xeon; System Recovery management server, 16GB RAM + SAS disks + dual Xeon. The above is the initial plan we have for 3000-4000 Windows client workstations. Now my questions :-)
    a) If we had these users distributed amongst two sites with an AD DC/GC in each site, how would I restrict SEPM and the Desktop Recovery management solution to only check for users in their respective site?
    b) At present all users are in one building, but we are going to move some departments to a new location (with dedicated connectivity). How would we control which SEPM / management server is responsible for which site?
    c) We have NetBackup in our environment backing up other servers. I am planning to protect these 4 boxes (2 x SQL, 1 x SEPM, 1 x System Recovery management server) via NetBackup, or I can use System Recovery 2011 Server Edition on all 4 of them as well. (Licensing is not an issue, as we have the complete Symantec portfolio included in our license.)
    d) Now - saving desktop backups: what strategies have you implemented? Any best-practice recommendations for a large user base? I was thinking of either mounting a LUN from our Hitachi SAN on the Symantec Recovery server itself, or backing up to the user's hard drive locally and then copying it over to a network location. Suggestions welcome :-)
    If you have anything to add or correct, that will be really helpful before diving into the actual implementation phase. I will be most grateful for your suggestions, recommendations and corrections to the above - many thanks!

    Read the article

  • How do you re-mount an ext3 fs readwrite after it gets mounted readonly from a disk error?

    - by cagenut
    It's a relatively common problem when something goes wrong in a SAN for ext3 to detect the disk write errors and remount the filesystem read-only. That's all well and good, but once the SAN is fixed I can't figure out how to re-re-mount the filesystem read-write without rebooting. Behold:

    [root@localhost ~]# multipath -ll
    mpath0 (36001f93000a310000299000200000000) dm-2 XIOTECH,ISE1400
    [size=1.1T][features=1 queue_if_no_path][hwhandler=0][rw]
    \_ round-robin 0 [prio=2][active]
     \_ 1:0:0:1 sdb 8:16 [active][ready]
     \_ 2:0:0:1 sdc 8:32 [active][ready]
    [root@localhost ~]# mount /dev/mapper/mpath0 /mnt/foo
    [root@localhost ~]# touch /mnt/foo/blah

    All good; now I yank the LUN out from under it.

    [root@localhost ~]# touch /mnt/foo/blah
    touch: cannot touch `/mnt/foo/blah': Read-only file system
    [root@localhost ~]# tail /var/log/messages
    Mar 18 13:17:33 localhost multipathd: sdb: tur checker reports path is down
    Mar 18 13:17:34 localhost multipathd: sdc: tur checker reports path is down
    Mar 18 13:17:35 localhost kernel: Aborting journal on device dm-2.
    Mar 18 13:17:35 localhost kernel: Buffer I/O error on device dm-2, logical block 1545
    Mar 18 13:17:35 localhost kernel: lost page write due to I/O error on dm-2
    Mar 18 13:17:36 localhost kernel: ext3_abort called.
    Mar 18 13:17:36 localhost kernel: EXT3-fs error (device dm-2): ext3_journal_start_sb: Detected aborted journal
    Mar 18 13:17:36 localhost kernel: Remounting filesystem read-only

    It only thinks it's read-only; in reality it's not even there.

    [root@localhost ~]# multipath -ll
    sdb: checker msg is "tur checker reports path is down"
    sdc: checker msg is "tur checker reports path is down"
    mpath0 (36001f93000a310000299000200000000) dm-2 XIOTECH,ISE1400
    [size=1.1T][features=0][hwhandler=0][rw]
    \_ round-robin 0 [prio=0][enabled]
     \_ 1:0:0:1 sdb 8:16 [failed][faulty]
     \_ 2:0:0:1 sdc 8:32 [failed][faulty]
    [root@localhost ~]# ll /mnt/foo/
    ls: reading directory /mnt/foo/: Input/output error
    total 20
    -rw-r--r-- 1 root root 0 Mar 18 13:11 bar

    How it still remembers that 'bar' file being there... mystery, but not important right now. Now I re-present the LUN:

    [root@localhost ~]# tail /var/log/messages
    Mar 18 13:23:58 localhost multipathd: sdb: tur checker reports path is up
    Mar 18 13:23:58 localhost multipathd: 8:16: reinstated
    Mar 18 13:23:58 localhost multipathd: mpath0: queue_if_no_path enabled
    Mar 18 13:23:58 localhost multipathd: mpath0: Recovered to normal mode
    Mar 18 13:23:58 localhost multipathd: mpath0: remaining active paths: 1
    Mar 18 13:23:58 localhost multipathd: dm-2: add map (uevent)
    Mar 18 13:23:58 localhost multipathd: dm-2: devmap already registered
    Mar 18 13:23:59 localhost multipathd: sdc: tur checker reports path is up
    Mar 18 13:23:59 localhost multipathd: 8:32: reinstated
    Mar 18 13:23:59 localhost multipathd: mpath0: remaining active paths: 2
    Mar 18 13:23:59 localhost multipathd: dm-2: add map (uevent)
    Mar 18 13:23:59 localhost multipathd: dm-2: devmap already registered
    [root@localhost ~]# multipath -ll
    mpath0 (36001f93000a310000299000200000000) dm-2 XIOTECH,ISE1400
    [size=1.1T][features=1 queue_if_no_path][hwhandler=0][rw]
    \_ round-robin 0 [prio=2][enabled]
     \_ 1:0:0:1 sdb 8:16 [active][ready]
     \_ 2:0:0:1 sdc 8:32 [active][ready]

    Great, right? It says [rw] right there. Not so fast:

    [root@localhost ~]# touch /mnt/foo/blah
    touch: cannot touch `/mnt/foo/blah': Read-only file system

    OK, it doesn't do it automatically; I'll just give it a little push:

    [root@localhost ~]# mount -o remount /mnt/foo
    mount: block device /dev/mapper/mpath0 is write-protected, mounting read-only

    The hell you are:

    [root@localhost ~]# mount -o remount,rw /mnt/foo
    mount: block device /dev/mapper/mpath0 is write-protected, mounting read-only

    Noooooooooo. I have tried all sorts of different mount/tune2fs/dmsetup commands and I cannot figure out how to get it to un-flag the block device as write-protected. Rebooting will fix it, but I'd much rather do it on-line. An hour of googling has gotten me nowhere either. Save me, ServerFault.
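    For completeness, the sequence most often suggested for this situation - a hedged sketch, since whether it works depends on the kernel and device-mapper versions involved:

    ```sh
    # 1. Check whether the kernel still flags the mapped device read-only:
    blockdev --getro /dev/mapper/mpath0    # prints 1 if read-only

    # 2. Clear the read-only flag:
    blockdev --setrw /dev/mapper/mpath0

    # 3. Retry the remount:
    mount -o remount,rw /mnt/foo

    # If ext3 has aborted its journal, the remount may still refuse;
    # a clean umount, fsck and fresh mount may be unavoidable:
    umount /mnt/foo && e2fsck -y /dev/mapper/mpath0 \
      && mount /dev/mapper/mpath0 /mnt/foo
    ```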

    Read the article

  • Developer’s Life – Attitude and Communication – They Can Cause Problems – Notes from the Field #027

    - by Pinal Dave
    [Note from Pinal]: This is the 27th episode of the Notes from the Field series. The biggest challenge for anyone is to understand human nature. We humans have so many things on our minds at any moment. There are cases when what we say is not what we mean, and cases where we do not say what we mean. We say and do things according to our mood and the agenda on our mind. Sometimes our attitude creates confusion in the communication, and we end up creating a situation which is absolutely not warranted. In this episode of the Notes from the Field series, database expert Mike Walsh explains a very crucial issue we face in our careers - one that is not technical but relates to human nature. Read on; this may be the best blog post you read in recent times.

    In this week’s note from the field, I’m taking a slight departure from technical knowledge and concepts explained. We’ll be back to it next week, I’m sure. Pinal wanted us to explain some of the issues we bump into, how we see some of our customers arrive at problem situations, and how we have helped get them back on the right track. Often it is a technical problem we are officially solving - but in a lot of cases, as consultants, we are really helping fix communication difficulties. This is a technical blog post and not an “advice column” in a newspaper - but the longer I am a consultant and the more years I add to my experience in technology, the more I learn that the vast majority of the problems we encounter have “soft skills” in the chain of causes for the issue we are helping overcome. This is not going to be exhaustive, but I hope that sharing a few pieces of advice inspired by real issues starts a process of searching for places where we can be the cause of these challenges and looking at fixing them in ourselves. Or perhaps we can begin looking at resolving them in teams that we manage. I’ll share three statements that I’ve either heard, read or said, and talk about some of the communication or attitude challenges highlighted by each.

    1 - “But that’s the SAN Administrator’s responsibility…” I heard that early on in my consulting career when talking with a customer who had serious corruption and no good recent backups - potentially no good backups at all. The statement doesn’t have to be this one exactly, but the attitude here is one of “my job stops here, and I don’t care about the intent or principle of why I’m here.” It’s also the attitude that as long as there is someone else to blame, I’m fine… You see, in this case the DBA had a suspicion that the backups were not being handled right. They were the DBA, and they knew they had responsibility to ensure SQL backups were good to go - it’s a basic requirement of a production DBA. In my “As A DBA Where Do I Start?!” presentation, I argue that it is job #1 of a DBA. But in this case, the thought was that there was someone else to blame. Rather than create extra work and take on responsibility, it was decided to just let it be another team’s responsibility. This failed the company and the company’s customers, and no one won. As technologists, we should strive to go the extra mile. If there is a lack of clarity around roles and responsibilities and we know it, we should push to get it resolved - especially as DBAs, who should act as the advocates of the data contained in the databases we are responsible for.

    2 - “We’ve always done it this way, it’s never caused a problem before!” Complacency.
    I have to say that many failures I’ve been paid good money to help recover from would not have happened had it not been for an attitude of complacency. If any thoughts like this have entered your mind about your situation, you may be suffering from it. If, while reading this, you get that sinking feeling in your stomach about the one thing you know should be fixed but haven’t fixed - why don’t you stop, go fix it, and then come back? “We should have better backups, but we’re on a SAN so we should be fine really.” “Technically speaking that could happen, but what are the chances?” “We’ll just clean that up as a fast follow.” ...and so on. In the age of tightening IT budgets and increased expectations of uptime, availability and performance, there is no room for complacency. Our customers and business units expect - no, demand - the best. Complacency says “we will give you second best, or hopefully good enough, and we accept the risk and know this may hurt us later.” Sometimes an organization will opt for “good enough,” and I agree that at times the perfect can be the enemy of the good. But when we make those decisions in a vacuum, and are not reporting them up and discussing them as an organization, that is different. That is us unilaterally choosing to do something less than the best and purposefully playing a game of chance.

    3 - “This device must accept interference from other devices but not create any.” I’ve paraphrased this one - but it’s something the Federal Communications Commission, a federal agency in the United States that regulates electronic communication, requires of all manufacturers of any device that could cause or receive interference electronically. I blogged in depth about this here (http://www.straightpathsql.com/archives/2011/07/relationship-advice-from-the-fcc/), so I won’t go into much detail other than to say this: if we all operated more on the premise that we should do our best not to be the cause of conflict, and to be less easily offended and less upset when we perceive offense, life would be easier in many areas! These attitudes don’t always directly cause the issues we are called in to help with. But where we do see them is in unhealthy relationships between the various technology teams at a client. We’ll see teams hoarding knowledge, not sharing well with others, and almost working against other teams instead of with them. If you trace these problems back far enough, they often stem from someone or some group of people violating this principle from the FCC.

    To Sum It Up: technology problems are easy to solve. At Linchpin People we help many customers get past the toughest technological challenges - and at the end of the day it is really just a repeatable process of pattern-based troubleshooting, logical thinking, and starting at the beginning and carefully stepping through to the end. It’s easy at the end of the day. The tough part of what we do as consultants is the people skills. Being able to help get teams working together, helping teams take responsibility, improving team-to-team communication? That is the difficult part, and we get to use the soft skills on every engagement. Work on professional development (http://professionaldevelopment.sqlpass.org/) and see continuing improvement here, not just with technology. I can teach just about anyone how to be an excellent DBA and performance tuner, but some of these soft skills are much more difficult to teach.
    If you want to get started with performance analytics and triage of virtualized SQL Servers with the help of experts, read more over at Fix Your SQL Server. Reference: Pinal Dave (http://blog.sqlauthority.com). Filed under: Notes from the Field, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • April Oracle Database Events

    - by Mandy Ho
    April 17-18, 2012 - Moscow, Russia: Oracle Develop Conference
    The Oracle database developer conference, Oracle Develop, will visit Moscow, Russia and Hyderabad, India this spring. Oracle Develop includes a database development track, which contains .NET sessions and a hands-on lab led by an Oracle .NET product manager. Register today before the event fills up. http://www.oracle.com/javaone/ru-en/index.html

    April 24 & 26, 2012 - San Diego, CA & San Jose, CA: (ISC)2 Security Leadership Regional Event Series
    Oracle at the (ISC)2 Security Leadership Series: Herding Clouds - Managing Cloud Security Concerns and Expectations. Join us for this interactive day-long session where industry leaders, including Oracle solution experts, will talk about how to: dispel the top cloud security myths; minimize "rogue cloud" implementations; identify potential compliance pitfalls and how to avoid them; develop contract language for your cloud providers; manage users across your fractured datacenter; and leverage existing technologies to protect data as it moves from the enterprise to the cloud. http://www.oracle.com/webapps/events/ns/EventsDetail.jsp?p_eventId=146972&src=7239493&src=7239493&Act=228

    April 22-26, 2012 - Las Vegas, NV: IOUG Collaborate 12
    From April 22-26, 2012, Oracle takes Las Vegas. Thousands of Oracle professionals will descend upon the Mandalay Bay Convention Center for a week's worth of education sessions, networking opportunities and more, at the only user-driven and user-run Oracle conference - COLLABORATE 12. Your COLLABORATE 12 - IOUG Forum registration comes complete with a bonus: a full-day Deep Dive education program on Sunday! Choose from numerous hot topics, including Real World Performance, High Availability and more, presented by legendary and seasoned Oracle pros, including Tom Kyte and Craig Shallahamer. http://www.oracle.com/webapps/events/ns/EventsDetail.jsp?p_eventId=143637&src=7360364&src=7360364&Act=5

    Independent Oracle User Group (IOUG) Regional Events:

    April 16, 2012 - Columbus, Ohio: Ohio Oracle Users Group
    Higher Performance PL/SQL and Oracle Database 11g PL/SQL New Features - featuring Steven Feuerstein. http://www.ooug.org/

    April 26, 2012 - Irving, TX: Dallas Oracle Users Group
    Oracle Database Forum, 5-7 PM, Oracle Corporation, 6031 Connection Drive, Irving, TX. http://memberservices.membee.com/538/irmevents.aspx?id=64

    April 30, 2012 - Los Angeles, CA: Real World Performance Tour
    A full day of real-world database performance with Tom Kyte, author of the ever-popular AskTom blog; Andrew Holdsworth, head of Oracle's Real World Performance Team; and Graham Wood, renowned Oracle Database performance architect. Through discussion, debate and demos, they'll show you how to master performance engineering topics like: best practices for designing hardware architectures and how to spot and fix bad design; how to develop applications that deliver the fastest possible performance without sacrificing accuracy; and, new for 2012, updates on Enterprise Manager, Exadata, and what these technologies mean to your current systems. http://www.ioug.org/Events/ADayofRealWorldPerformance/tabid/194/Default.aspx
    Read the article

  • The SPARC SuperCluster

    - by Karoly Vegh
Oracle has been providing a lead in the Engineered Systems business for quite a while now, in accordance with the motto "Hardware and Software Engineered to Work Together." Indeed it is hard to find a better definition of these systems. Allow me to summarize the idea. It is:
- Build a compute platform optimized to run your technologies
- Develop application-aware, intelligently caching storage components
- Take an impressively fast network technology interconnecting them with the compute nodes
- Tune the application to scale with the nodes to yet-unseen performance
- Reduce the amount of data moving via compression
- Provide this all in a pre-integrated single product with a single-pane management interface
All these ideas have been around in IT for quite some time now. The real Oracle advantage is adding the last one to put them all together. Oracle has built quite a portfolio of Engineered Systems to run its technologies - and run them like they never ran before. In this post I'll focus on one of them that serves as a consolidation demigod, a multi-purpose engineered system. As you probably have guessed, I am talking about the SPARC SuperCluster. It has many great features inherited from its predecessors, and it adds several new ones. Allow me to pick out and elaborate on some of the most interesting ones from a technological point of view.

I. It is the SPARC SuperCluster T4-4. That is, as compute nodes, it includes SPARC T4-4 servers that we have learned to appreciate and respect for their features:
- The SPARC T4 CPUs: each CPU has 8 cores, and each core runs 8 threads. The SPARC T4-4 servers have 4 sockets. That is, a single compute node can simultaneously execute 256 threads in parallel. Now, a full-rack SPARC SuperCluster has 4 of these servers on board. Remember the keyword demigod.
- While retaining the forerunner SPARC T3's exceptional throughput, the SPARC T4 CPUs raise the bar on single-threaded performance too - a humble 5x better than their ancestors.
- Actually, the SPARC T4 CPU cores run in both single-threaded and multi-threaded mode, and switch between the two on the fly, fulfilling not only single-threaded OR multi-threaded applications' needs, but even mixed requirements (like in database workloads!).
- Data security, anyone? Every SPARC T4 CPU core has a built-in encryption engine, that is, encryption algorithms cast into silicon.
- A PCI controller right on the chip for customers who need I/O performance.
- Built-in, no-cost virtualization: Oracle VM for SPARC (the former LDoms or Logical Domains) is not a server-emulation virtualization technology but rather a server-partitioning one. The hypervisor runs in the server firmware, and all the VMs' HW resources (I/O, CPU, memory) are accessed natively, without performance overhead. This enables customers to run a number of Solaris 10 and Solaris 11 VMs separate and independent of each other within a physical server.

II. For database performance, it includes Exadata Storage Cells - one of the main reasons why the Exadata Database Machine performs at diabolic speed. What makes them important?
- They provide the DB backend storage for the Oracle Databases running on the SPARC SuperCluster; that is what they are built and tuned for: DB performance.
- These storage cells are SQL-aware.
That is, if a SPARC T4 database compute node executes a query, it doesn't simply request tons of raw data blocks from the storage, filter the received data, and throw away most of it where the statement doesn't apply; it provides the SQL query to the storage node too. The storage cell software speaks SQL, that is, it is able to pre-filter and thereby transfer only the relevant data. With this, the traffic between database nodes and storage cells is reduced immensely. Less I/O is a good thing - as they say, all the CPUs of the world do one thing just as fast as any other, and that is waiting for I/O.
- They don't only pre-filter, but also provide data preprocessing features - e.g. if a DB node requests an aggregate of the data, they can calculate it and hand over only the results, not the whole set. Again, less data to transfer.
- They support the magical HCC (Hybrid Columnar Compression). That is, data can be stored in a precompressed form on the storage. Less data to transfer.
- Of course one can't simply rely on disks for performance; there is Flash storage included for caching.

III. The low-latency, high-speed backbone network: InfiniBand, which interconnects all the members with:
- Real high speed: 40 Gbit/s. Full duplex, of course. Oh, and a really low latency.
- RDMA, Remote Direct Memory Access. This technology allows the DB nodes to do exactly that: remotely, directly placing SQL commands into the memory of the storage cells, dodging all the network-stack bottlenecks, avoiding overhead, placing requests directly into the process queue.
- You can also run IP over InfiniBand if you please - that's the way the compute nodes can communicate with each other.

IV. It includes a general-purpose storage too: the ZFSSA, a unified storage providing both NAS and SAN access, with the following features:
- NFS over RDMA over InfiniBand. Nothing is faster network-filesystem-wise.
- All the ZFS features onboard: hybrid storage pools, compression, deduplication, snapshots, replication, NFS and CIFS shares.
- Storage heads in an HA-cluster configuration, providing availability of the data.
- DTrace Live Analytics in a web-based administration UI.
- It is a general-purpose application data storage for your non-database applications running on the SPARC SuperCluster, over whichever protocol they prefer, easily replicating, snapshotting, and cloning their data.

There's a lot of great technology included in Oracle's SPARC SuperCluster; we have talked its interior through. As for external scalability: you can start with a half- or full-rack SPARC SuperCluster, and scale out to several racks - that is, not stacking separate full-rack SPARC SuperClusters, but always extending one large instance to the size of several full racks. Yes, over the InfiniBand network. Add racks as you grow.

What technologies shall run on it? The SPARC SuperCluster is a general-purpose scale-out consolidation/cloud environment. You can run Oracle Databases with RAC scaling, or Oracle WebLogic (and enjoy the SPARC T4's advantages for running Java). Remember, Oracle technologies have been integrated with the Oracle Engineered Systems - this is the Oracle-on-Oracle advantage. But you can also run other software environments such as SAP if you please. Run any application that runs on Oracle Solaris 10 or Solaris 11. Separate them in virtual machines, or even Oracle Solaris Zones, and monitor and manage those from a central UI.
Here are the key takeaways once again. The SPARC SuperCluster:
- Is a pre-integrated Engineered System
- Contains SPARC T4-4 servers with built-in virtualization, cryptography, and dynamic threading
- Contains the Exadata storage cells that intelligently offload the burden of the DB nodes
- Contains a highly available ZFS Storage Appliance that provides SAN/NAS storage in a unified way
- Combines all these elements over a high-speed, low-latency backbone network implemented with InfiniBand
- Can grow from a single half-rack to several full racks in size
- Supports the consolidation of hundreds of applications
To summarize: all these technologies are great by themselves, but the real value is, like in every other Oracle Engineered System, integration. All these technologies are tuned to perform together. Together they are way more than the sum of their parts - and a careful and actually very time-consuming integration process is necessary to orchestrate them all for performance. The SPARC SuperCluster's goal is to enable infrastructure operations and offer a pre-integrated solution that can be architected and delivered in hours instead of months of evaluations and tests. The tedious and, most importantly, time- and resource-consuming part of the work - testing and evaluating - has been done. Now go, provide services.

-- charlie

    Read the article

  • CascadingDropDown jQuery Plugin for ASP.NET MVC

    - by rajbk
CascadingDropDown is a jQuery plugin that can be used by a select list to get automatic population using AJAX. A sample ASP.NET MVC project is attached at the bottom of this post.

Usage
The code below shows two select lists:

    <select id="customerID" name="customerID">
        <option value="ALFKI">Maria Anders</option>
        <option value="ANATR">Ana Trujillo</option>
        <option value="ANTON">Antonio Moreno</option>
    </select>

    <select id="orderID" name="orderID">
    </select>

When a customer is selected in the first select list, the second list will auto-populate itself with the following code:

    $("#orderID").CascadingDropDown("#customerID", '/Sales/AsyncOrders');

Internally, an AJAX post is made to '/Sales/AsyncOrders' with the post body containing customerID=[selectedCustomerID]. This executes the action AsyncOrders on the SalesController with signature AsyncOrders(string customerID). The AsyncOrders method returns JSON, which is then used to populate the select list. The JSON format expected is shown below:

    [{ "Text": "John", "Value": "10326" }, { "Text": "Jane", "Value": "10801" }]

Details
$(targetID).CascadingDropDown(sourceID, url, settings)
- targetID: The ID of the select list that will auto-populate.
- sourceID: The ID of the select list which, on change, causes the targetID to auto-populate.
- url: The URL to post to.

Options
- promptText: Text for the first item in the select list. Default: -- Select --
- loadingText: Optional text to display in the select list while it is being loaded. Default: Loading..
- errorText: Optional text to display if an error occurs while populating the list. Default: Error loading data.
- postData: Data you want posted to the url in place of the default. Example: { postData : { customerID : $('#custID'), orderID : $('#orderID') }} will cause customerID=ALFKI&orderID=2343 to be sent as the POST body. Default: a text string obtained by calling serialize on the sourceID.
- onLoading (event): Raised before the list is populated.
- onLoaded (event): Raised after the list is populated. The example below shows how to "animate" the select list after load.

Example using custom options:

    $("#orderID").CascadingDropDown("#customerID", '/Sales/AsyncOrders', {
        promptText: '-- Pick an Order--',
        onLoading: function () {
            $(this).css("background-color", "#ff3");
        },
        onLoaded: function () {
            $(this).animate({ backgroundColor: '#ffffff' }, 300);
        }
    });

To return JSON from our action method, we use the Json ActionResult, passing in an IEnumerable<SelectListItem>:

    public ActionResult AsyncOrders(string customerID)
    {
        var orders = repository.GetOrders(customerID).ToList().Select(a => new SelectListItem()
        {
            Text = a.OrderDate.HasValue ? a.OrderDate.Value.ToString("MM/dd/yyyy") : "[ No Date ]",
            Value = a.OrderID.ToString(),
        });
        return Json(orders);
    }

Sample Project using VS 2010 RTM: NorthwindCascading.zip
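For readers curious about the mechanics, here is a minimal hand-rolled sketch of the same cascade in plain jQuery. This is not the plugin's source - just an illustration, under stated assumptions, of the flow described above: serialize the source list, POST it to the action, and rebuild the target's options from the returned JSON. The element IDs and URL are the ones from the sample; everything else is assumed for illustration.

    // Illustrative only: roughly the flow the plugin automates for you.
    $(function () {
        var source = $('#customerID'),
            target = $('#orderID');

        source.change(function () {
            // Placeholder option while the request is in flight.
            target.empty().append($('<option/>').text('Loading..'));

            // POST the source selection (e.g. customerID=ALFKI) to the action.
            $.post('/Sales/AsyncOrders', source.serialize(), function (items) {
                target.empty().append($('<option/>').val('').text('-- Select --'));

                // Rebuild the target from the [{ Text, Value }] JSON array.
                $.each(items, function (i, item) {
                    target.append($('<option/>').val(item.Value).text(item.Text));
                });
            }, 'json');
        });
    });

On top of this basic flow, the plugin adds the configurable prompt/loading/error text, the postData override, and the onLoading/onLoaded events listed in the options above.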

    Read the article

< Previous Page | 45 46 47 48 49 50 51 52 53 54 55 56  | Next Page >