Search Results

Search found 28682 results on 1148 pages for 'drop down menu'.


  • Referrer Strings

    - by WernerCD
    I'm not exactly sure how to ask this, but here I go. I work for a company that provides IT services. Our main customer has a website developed by WebCompany.com, and I act as a maintainer/middle-man for this website on our customer's behalf. On the front page there is a drop-down menu with links. One link goes to a third-party website - ThirdParty.com - which automatically logs you in based on your IP or your referrer string. The flow is: homepage -> ThirdParty.com -> logged in via IP or referrer string. What's happening is that Website.com is passing along inconsistent referrer strings. I've asked them to look into it and they say it's not their fault; referrer strings are handled by the browser. ThirdParty.com is actually nice enough to list the IP and referrer string on the front page for users who are not logged in. So... how can I trace the referrer string so that I can figure out why http://Website.com's front-page link to ThirdParty.com is showing a referrer of "None" or "Thirdparty.com" instead of "ThirdParty.com"? Their website is built in PHP. Can you force all links on a website to report the referrer as "http://website.com/" instead of "http://website.com/portal/1" or "http://website.com/page3"?
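
    One way to narrow this down from the command line is to send the request myself with an explicit Referer and compare what ThirdParty.com reports back (a debugging sketch - the URLs are the placeholders used above, not real addresses):

        # curl -e / --referer sets the Referer header explicitly; since the
        # third-party page prints the referrer it saw, grep for it in the output.
        curl -s -e "http://website.com/" http://thirdparty.com/ | grep -i referrer
        curl -s -e "http://website.com/portal/1" http://thirdparty.com/ | grep -i referrer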

  • Best way to let users choose country/language when submitting a URL to a directory

    - by Claudiu
    I want to offer the user the possibility to add the country/language for websites they submit to a fairly simple website directory. I have a folder with flags from http://www.famfamfam.com/lab/icons/flags/ . The flag images are named according to the ISO 3166-1 alpha-2 country codes, meaning that a PHP script could retrieve the images and derive the country from each image name (not the full country name, but that wouldn't be necessary). Just to make things clearer: I couldn't find a combo-box jQuery plugin suited to my needs (one that acts exactly like the native control but with an icon before the text), and I don't really have the time to develop one on my own. Considering the number of images, I also wouldn't just display them all with a radio button next to each. Likewise, a classic drop-down list would be a nightmare for me, as I would have to assign the short country name manually to each entry, or do it once for every country. Offering the user a drop-down list with the short country names but no flag next to them would also be unfriendly and confusing. The idea is that every website featured in the directory will have the country flag icon near it. I have the images named properly, but I don't know how to let the user choose the right image for their website. Any ideas? Thank you all in advance!

    EDIT: A temporary solution is this file: http://www.andrewpatton.com/countrylist.csv It contains a list of countries along with various other info, including the short country name - the same name that's used for the flag images. I can take that information and build a classic select like this:

        <select name="countries">
          <option value="ro">Romania</option>
          <option value="ie">Ireland</option>
          <!-- and so on -->
        </select>

    Still, if anybody has a better idea...
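
    A sketch of generating that markup straight from the CSV (assuming the short name and the two-letter ISO code are the first two columns - the actual column order in countrylist.csv should be checked first):

        # Skip the CSV header, strip quotes, lowercase the ISO code so it
        # matches the flag file names (ro.png, ie.png, ...), emit the markup.
        tail -n +2 countrylist.csv | awk -F',' '{
            gsub(/"/, "")
            printf "<option value=\"%s\">%s</option>\n", tolower($2), $1
        }'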

  • How the SPARC T4 Processor Optimizes Throughput Capacity: A Case Study

    - by Ruud
    "This white paper demonstrates the architected latency hiding features of Oracle's UltraSPARC T2+ and SPARC T4 processors." That is the first sentence of this technical white paper, but what exactly does it mean? Let's consider a very simple example, the computation of a = b + c. This boils down to the following (pseudo-assembler) instructions that need to be executed:

        load  @b, r1
        load  @c, r2
        add   r1, r2, r3
        store r3, @a

    The first two instructions load variables b and c from an address in memory (symbolized here by @b and @c respectively). These values go into registers r1 and r2. The third instruction adds the values in r1 and r2; the result goes into register r3. The fourth instruction stores the contents of r3 into the memory address symbolized by @a. If we're lucky, both b and c are in a nearby cache and the load instructions take only a few processor cycles to execute. That is the good case, but what if b or c, or both, have to come from very far away? Perhaps both of them are in main memory, in which case it easily takes hundreds of cycles for the values to arrive in the registers. Meanwhile the processor does nothing and simply waits for the data to arrive. Actually, it does do something: it burns cycles while waiting. That is a waste of time and energy. Why not use these cycles to execute instructions from another application, or from another thread in the case of a parallel program? That is exactly what latency hiding on the SPARC T-Series processors does. It is a hardware feature totally transparent to the user and the application. As soon as there is a delay in the execution, the hardware uses these otherwise idle cycles to execute instructions from another process. As a result, the throughput capacity of the system improves because idle cycles are no longer wasted, and therefore more jobs can be run per unit of time. This feature has been in the SPARC T-Series from the beginning, so why this paper? The difference from previous publications on this topic is in the amount of detail given. How this all works under the hood is fully explained using two example programs. Starting from the assembly language instructions, it is demonstrated how these programs execute. To really see what is happening, we go down to the processor pipeline level, where the gaps in the execution are, and show how these idle cycles are filled by other copies of the same program running simultaneously. Both the SPARC T4 and the older UltraSPARC T2+ processor are covered. You may wonder why the UltraSPARC T2+ is included. The focus of this work is on the SPARC T4 processor, but to explain the basic concept of latency hiding at this very low level, we start with the UltraSPARC T2+ processor because it is architecturally a much simpler design. From the single-issue, in-order pipelines of this processor we then shift gears and cover how this all works on the much more advanced dual-issue, out-of-order architecture of the T4. The analysis and performance experiments have been conducted on both processors. The results depend on the processor, but in all cases the theoretical estimates are confirmed by the experiments. If you're interested in reading a lot more about this and finding out how things really work under the hood, you can download a copy of the paper here. A paper like this could not have been produced without the help of several other people. I want to thank the co-author of this paper, Jared Smolens, for his very valuable contributions and our highly inspiring discussions.
I'm also indebted to Thomas Nau (Ulm University, Germany), Shane Sigler and Mark Woodyard (both at Oracle) for their feedback on earlier versions of this paper. Karen Perkins (Perkins Technical Writing and Editing) and Rick Ramsey at Oracle were very helpful in providing editorial and publishing assistance.

  • Rankings dropping after small URL-change WITH 301-redirect

    - by David
    Two weeks ago, we attempted to make the URLs of ca. 12 pages more search-engine friendly. We changed three things:

    1. Made the URLs more SEF, from /????-????/brandname.html (e.g. /aircon-price/daikin.html) to /????-brandnameinenglish-brandnameinthai.html. We set up 301 redirects from the old to the new URLs. You can find an example and the link to our page here: http://bit.ly/XRoTOK There are no direct external links to the old URLs.

    2. Added text to the image links from the homepage to the brand pages. Before those changes, we only linked to those brands with a picture, so we added some text under the picture. You can see that here, in the left submenu: http://bit.ly/XRpfoF

    3. Made minor changes to the title, h1 tags, meta description, etc., to better match the on-site optimization with the targeted keywords. For example, where before we used full brand names, afterwards we used what was really searched for: from "Mitsubishi Electric Mr. Slim" to "???? Mitsubishi" (meaning: Aircon Mitsubishi).

    Three days after these changes, we noticed a heavy drop (80% loss in non-paid search traffic) in rankings and traffic for those pages, and also for all pages which are sub-categorized. Rankings for all keywords not affected by the changes stayed the same. Any ideas what happened, and how we can regain our old rankings? What we already did was submit a new sitemap. Help much appreciated. Best regards, David
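
    One quick sanity check is to confirm each old URL answers with a single 301 hop straight to its new counterpart (a sketch - the URL below is a placeholder for one of the 12 old paths):

        # -s silences progress, -I fetches headers only, -L follows redirects;
        # the output should show one "301" status and one Location header.
        curl -sIL http://example.com/aircon-price/daikin.html | grep -Ei '^(HTTP|Location)'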

  • Spam bot constantly hitting our site 800-1,000 times a day. Causing loss in sales

    - by akaDanPaul
    For the past 5 months our site has been receiving hits from the 4 sites below:

        sheratonbd.com
        newsheraton.com
        newsheration.com
        newsheratonltd.com

    Typically the exact URL they come from looks something like this: http://www.newsheraton.com/ClickEarnArea.aspx?loginsession_expiredlogin=85 The spam bot goes to our homepage, stays there for about 1 minute, and then exits. Luckily we have some pretty beefy servers, so it hasn't even come close to overloading them yet. Last month I started blocking the IP addresses of the spam bots, but they seem to keep getting new ones every day. So far I have blocked over 200 IP addresses; below are a few of the ones I have blocked. They all come from Bangladesh.

        58.97.238.214
        58.97.149.132
        180.234.109.108
        180.149.31.221
        117.18.231.5
        117.18.231.12

    Since this has been going on for the past 5 months, our real site traffic has started to drop, and every day our orders get lower and lower. Also, since these spam bots simply go to our homepage and then leave, our bounce rate in analytics has skyrocketed. My questions are:

    - Is it possible that these spam bots are affecting our SEO? 60% of our orders come from natural search, and since this whole thing started, orders have slowly been dropping.
    - What would be the reason someone would want to waste resources doing this to our site? IPs aren't free and neither are domain names - what would be the goal in doing this to us? We have Google AdWords but don't advertise on extended networks, nor do we advertise in Bangladesh, since we don't ship there - so they are not making money on AdSense.
    - Has anyone experienced anything similar to this? What did you do, and what was the final outcome?
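
    Since the bot rotates addresses, a small sketch like this (assuming a Linux server with iptables and a plain-text blocklist, one IP per line) at least automates the blocking:

        # Append a DROP rule for every address in blocklist.txt; re-run the
        # loop whenever new offenders are added to the file.
        while read -r ip; do
            sudo iptables -A INPUT -s "$ip" -j DROP
        done < blocklist.txt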

  • How to organize music files?

    - by newbie
    I'm trying to re-organize my music files. I copied them from my iPod before it died, so they have funky names, like DGEDH.mp3. I tried using a Windows utility to rename them from their ID3 tags, but it didn't work very well--it created a bunch of folders (one for each album in theory, but more like 7 or 8 in reality) and renamed the files with non-English characters. Most of the files are MP3s, but there's at least one or two other file types as well. I'd like to copy them to a new folder and try a Ubuntu utility to rename and re-organize them. In Windows, this would be straightforward--I'd use the Search function (with no file name specified) to list all of the files in the main folder and its subfolders, then drag and drop them to the new folder. What's the easiest way to accomplish the same thing in Ubuntu? The GUI search doesn't seem to accept wildcard characters, and I don't remember all of the file types I have, so it's not as simple as searching for "mp3". Many thanks!
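
    For the gathering step itself, a terminal one-liner covers all file types at once (a sketch - adjust the two paths to the real source and destination folders):

        # Recursively copy every regular file, whatever its extension,
        # into a single flat target folder; cp -t names the destination once.
        mkdir -p ~/Music/to-sort
        find ~/Music/from-ipod -type f -exec cp -t ~/Music/to-sort {} +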

  • Ubuntu keyboard shortcuts - two-part question

    - by Don
    Background: I come from a Windows background and just started dual-booting Ubuntu (my first Linux experience) about 4 days ago. So my systems are Windows 7/Ubuntu 12.04, and so far I'm loving Ubuntu. I am a dedicated mouse-abolitionist (trackpads are hell) and do most of my browsing and navigation with keyboard shortcuts. However, on switching to Ubuntu, a lot of my keyboard shortcuts are gone, and my productivity has consequently taken a huge hit any time I am using Ubuntu.

    Problem 1: My computer was designed to display on-screen notifications for a second when I hit Caps Lock or Num Lock, and there are no constant indicators of the lock status (LEDs, etc.). In Ubuntu, the keys still worked, but the notifications were gone. Googling got me a tutorial on key binding (Compiz) and scripts, so now Caps Lock and Num Lock run this script:

        #!/bin/bash
        icon="/usr/share/icons/gnome/scalable/devices/keyboard.svg"
        case $1 in
            'scrl') mask=3; key="Scroll" ;;
            'num')  mask=2; key="Num" ;;
            'caps') mask=1; key="Caps" ;;
        esac
        value=$(xset q | grep "LED mask" | sed -r "s/.*LED mask:\s+[0-9a-fA-F]+([0-9a-fA-F]).*/\1/")
        if [ $(( 0x$value & 0x$mask )) == $mask ]
        then
            output="$key Lock"
            output2="On"
        else
            output="$key Lock"
            output2="Off"
        fi
        notify-send -i $icon "$output" "$output2" -t 1000

    But, whether turning it off or on, the notifications always say that I have turned it on. Is there an easy fix for this, or an easier way to get it to display the CORRECT notifications?

    Problem 2: I'm not sure if this is because of my keyboard or Ubuntu. In Windows, I use Chrome and use the Ctrl+PgUp/PgDn shortcuts quite a bit to switch between tabs. On my keyboard, I can enter PgUp and PgDn either by disabling Num Lock and hitting 9 or 3 respectively on the 10-key, or by holding the Fn key and hitting the up or down arrow. The first method is the one I very heavily relied upon, and it works in Firefox for Ubuntu, but not in Chrome nor in Chromium. The second method (Ctrl + Fn + up/down) works fine in Chrome. However, I'd dearly like to find a way to make the first method work. Any suggestions? Thanks in advance for your help.

    Update: @julien: I've checked the keyboard layout options - I didn't find anything that seemed useful for this goal. @Marty: The script runs once when the button is pressed. In Compiz, I've tied those two keys to the script, so when I press the button, it runs the script with the pressed button as a parameter.

    Update: @elmicha: Thanks. That one works a lot better, and it even pops an icon into the status bar when Caps Lock is on. There's still a very slight problem: if I quickly tap the key twice, the image shows that it has been turned on and then off, and the notification comes and goes from my status area, but the text of both notifications is "Caps Lock on". Same with Num Lock. However, if the time between presses is long enough for the first notification to disappear, everything displays correctly. Given how quickly the notifications disappear, I don't expect this will pose too much of a problem for me.
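
    For Problem 1, a quick way to check whether the script is simply reading the LED mask before it updates (i.e., a timing race) is to watch the mask in real time while tapping the lock keys - a debugging aid, not a fix:

        # Re-runs xset five times a second; if the reported mask lags the
        # key press, the script above is reading the state too early.
        watch -n 0.2 'xset q | grep "LED mask"'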

  • HTC Android device mounted as USB drive is read-only unless I'm root

    - by Ian Dickinson
    When I connect my HTC Incredible S to my Ubuntu 10.10 system as a USB drive, the device seems to mount OK, but is read-only unless I access it as root. For example, if I run nautilus, I can't drag and drop files to the SD card in the phone, but if I run sudo nautilus I can. I have USB debug support set on the phone (Applications > Development > USB debugging) and I have added a rule for the device in /etc/udev/rules.d/51-android.rules on my Ubuntu system. Any suggestions as to how I can mount the drive so that I can copy content to the SD card without needing to sudo?

    Update: Following advice from waltinator, I added the following line to my /etc/fstab:

        UUID=3537-3834 /media/usb1 vfat rw,user,noexec,nodev,nosuid,noauto

    However, the Android device is still being auto-mounted on /media/usb1 with uid and gid root.

    Update 2: syslog output:

        Nov 21 23:38:40 rowan-15 usbmount[4352]: executing command: mount -tvfat -osync,noexec,nodev,noatime,nodiratime /dev/sdd1 /media/usb1
        Nov 21 23:38:40 rowan-15 usbmount[4352]: executing command: run-parts /etc/usbmount/mount.d
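
    Since vfat has no Unix permissions, ownership is fixed at mount time. One hedged workaround (assuming the device node really is /dev/sdd1, as in the syslog above) is to remount with explicit ownership options:

        # Unmount whatever usbmount set up, then remount owned by your user;
        # fmask/dmask keep files and directories world-readable.
        sudo umount /media/usb1
        sudo mount -t vfat -o rw,uid=$(id -u),gid=$(id -g),fmask=0022,dmask=0022 /dev/sdd1 /media/usb1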

  • Algorithm to infer tag hierarchy

    - by Tom
    I'm looking for an algorithm to infer a hierarchy from a set of tagged items. E.g. if the following items have the tags:

        1  a
        2  a,b
        3  a,c
        4  a,c,e
        5  a,b
        6  a,c
        7  d
        8  d,f

    Then I can construct an undirected graph (or graphs) by tallying the node weights and edge weights:

        node weights        edge weights
        a 6                 a-b 2
        b 2                 a-c 3
        c 3                 c-e 1
        d 2                 a-e 1   <-- this edge is parallel to a-c and c-e and not wanted
        e 1                 d-f 1
        f 1

    The first problem is: how do I drop any redundant edges to get to the simplified graph? Note that it's only appropriate to remove that redundant a-e edge in this case because something is tagged as a-c-e; if that wasn't the case and the tag was a-e, that edge would have to remain. I suspect that means the removal of edges can only happen during the construction of the graph, not after everything has been tallied up. What I'd then like to do is identify the direction of the edges to create a directed graph (or graphs) and pick out root nodes to hopefully create a tree (or trees):

        trees
          a        d
         / \       |
        b   c      f
             \
              e

    It seems like it could be a string algorithm - longest common subsequences/prefixes - or a tree/graph algorithm, but I am a little stuck since I don't know the correct terminology to search for it.

  • Cat 6 Only 100mbit speed

    - by Stu2000
    I tried two different Cat 6 cables directly connected between my two Ubuntu machines. This one, which I ordered online - http://www.amazon.co.uk/gp/product/B002SQPDXS/ref=wms_ohs_product - only achieves 100 Mbit speeds, though it does appear to support a direct PC-to-PC (crossover) connection; the other Cat 6 cable worked perfectly and gets the full 1 gigabit speed. Both tests were performed using FTP and checking the network monitor, with a direct PC-to-PC connection. Did the product from Amazon lie to me, or do I need to manually set a setting somewhere in Ubuntu for some cables? I had thought 10 quid for 20m of gigabit ethernet cable was a bit cheap - you get what you pay for... Regards, Stu

    Update: It seems that after rebooting, the device is set to 1000 Mbit/s when looking it up with

        sudo ethtool eth0

    However, after a while this will drop down to just 100, after which the only way I can reset it to 1000 is to reboot; simply unplugging and re-plugging the cable doesn't do it. I tried setting this in the networking config file, as suggested here:

        auto eth0
        iface eth0 inet static
            pre-up /usr/sbin/ethtool -s eth0 speed 1000 duplex full

    but that resulted in my networking failing to start. Is there a problem with my 'auto-negotiation' or something? Can I manually override a setting to 1000 Mbit?
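
    Two hedged notes: the static stanza above declares no IP address, which by itself can stop networking from starting; and when the link drops to 100, it may be possible to renegotiate by hand instead of rebooting (a sketch - option semantics vary by driver):

        # Ask the NIC to renegotiate, advertising gigabit full duplex;
        # check the result afterwards with: sudo ethtool eth0
        sudo ethtool -s eth0 speed 1000 duplex full autoneg on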

  • Kids and programming: ScratchKara

    - by Mike Pagel
    Every now and then I kept wondering how to share with my kids the excitement of creating something with your computer. Of course, today this is a bit more difficult, as they have seen 3D animation games and well-edited websites. I guess that's why they weren't all that hyped when I found my first computer model at our local recycling facilities (an 8-bit Laser VZ-200 with rubber keys). When I finally got it up and running with an old analog TV set, they finally asked whether we could play soccer on it. Needless to say, my showing them how it remembers some BASIC commands and lists and executes them did not make any impression. So the question is for real: how do you get today's kids excited about programming? And just recently I looked again for environments that allow even young kids (mine are 7 and 9 years old now) to do something and have fun. Obviously any real, text-oriented programming language wouldn't work well. To cut it short: something really nice was built by the University of Oldenburg: ScratchKara. It is the perfect mixture of Kara, a simulation of a little ladybug, and Scratch, an authoring environment from MIT. ScratchKara allows kids to initially just explore how the bug moves and turns by pressing the action buttons, then move towards sequencing commands through drag & drop, and eventually end up building algorithms with procedures and functions. Even though it is built for kids and beginners, the environment comes with debugging and refactoring, which I found more than amazing. My kids love it, and I have to admit I keep thinking about how to solve a bit more advanced problems with this language, which does not allow you to store any state information (other than your call stack). Yes, I am hooked, too... Once the language is understood, you can then move to one of the original Kara versions, where you can define the bug's behavior through finite state machines, Turing tables, Java, and other textual languages. And from there, anything is possible.

  • Letting search engines know that different links to identical pages stress different parts of the page

    - by balpha
    When you follow a permalink to a chat message in the Stack Exchange chat, you get a view of the transcript page for the day that contains the particular message. This message is highlighted in yellow, and the page is scrolled to its position. Sometimes - admittedly rarely, but it happens - a web search will result in such a transcript link. Here's a (constructed, obviously) example: a Google search for strange behavior of the \bibliography command site:chat.stackexchange.com gives me a link to this chat message. This message is obviously unrelated to my query, but the transcript page does indeed contain my search terms - just in a totally different spot. Both of the above links lead to the same content, and Google knows this, since both pages have

        <link rel="canonical" href="/transcript/41/2012/4/9/0-24" />

    in their <head>. The only difference between the two links is which message has the highlight CSS class. Is there a way to let Google know that while all three links have the same content, they put an emphasis on a different part of the content? Note that the permalinks on the transcript page already have a #12345 hash to "point" to the relevant chat message, but Google appears to drop it.

  • Working with Reporting Services Filters–Part 5: OR Logic

    - by smisner
    When you combine multiple filters, Reporting Services uses AND logic. Once upon a time, there was actually a drop-down list for selecting AND or OR between filters, which was very confusing to people because it was often grayed out. Now that selection is gone, but no matter - it wouldn't help us solve the problem that I want to describe today. As with many problems, Reporting Services gives us more than one way to apply OR logic in a filter. If I want a filter to include this value OR that value for the same field, one approach is to set up the filter using the IN operator, as I explained in Part 1 of this series. But what if I want to base the filter on two different fields? I need a different solution. Using the AdventureWorksDW2008R2 database, I have a report that lists product sales. Let's say that I want to filter this report to show only products that are Bikes (a category) OR products for which sales were greater than $1,000 in a year. If I set up the filter like this:

        Expression      Data Type   Operator   Value
        [Category]      Text        =          Bikes
        [SalesAmount]               >          1000

    then AND logic is used, which means that both conditions must be true. That's not the result I want. Instead, I need to set up the filter like this:

        Expression:  =Fields!EnglishProductCategoryName.Value = "Bikes" OR Fields!SalesAmount.Value > 1000
        Data Type:   Boolean
        Operator:    =
        Value:       =True

    The OR logic needs to be part of the expression so that it can return a Boolean value that we test against the Value. Notice that I have used =True rather than True for the value. The filtered report appears below. Any non-bike product appears only if the total sales exceed $1,000, whereas Bikes appear regardless of sales. (You can't see it in this screenshot, but Mountain-400-W Silver, 38 has sales of $923 in 2007 but gets included because it is in the Bikes category.)

  • Where Have All the Ugly Forms Gone? Users and ADF Took Care Of It

    - by ultan o'broin
    Sometimes I hear that our application demos are a bit too "cutesy" and that we never talk about user roles that have lots of data entry as a requirement. Some (no names) consider those old clunker forms, with their myriad rows of fields, to be super-productive for data clerks. We do have such roles covered in Oracle Fusion Applications, for sure. But consider what is really the issue here: productivity. Check out how the Oracle Fusion Financials Applications User Experience team went about designing for productivity when receiving and entering invoice data, for example. See how Fusion Financials caters so well for input and control of data? Central to all this is knowing the users and how they work: what tasks do they need to perform, and when. Read more about Fusion Financials productivity in the white paper, Get It Done Fast, Get It Done Right: The Oracle Fusion Financials User Experience. Now and then, I see forms that weren't designed for end-user activity at all. Instead, they were designed by developers or by the IT department around the database schema. Forms with literally dozens of fields on the same page, sometimes. Forms that give the impression there was only one task involved, when there may have been several. At times, completing one of these huge forms accurately becomes so tedious that, under pressure, it makes more sense for the user to complete it as quickly as possible and then let somebody else check it for accuracy and fill in the gaps from data emailed along in spreadsheet form. Data accuracy is critical in our business. Not good. Not efficient. Not productive. So here are a few basics on forms design for data entry-type user roles. A great place for developers to start exploring what is possible with forms layout is the Rich Client User Experience (RCUX) guidance on Form Layout, using ADF components.

    User-Centered Forms Design Considerations

    The starting point - something you must always keep in mind with your own design - is design for the end user. Find a representative end user, and keep that user engaged throughout the design, deployment, and test process. Consider these points when user-testing those forms:

    - Are there automated or technical solutions to entering the data that avoid manual input in the first place? For example, imports, uploads, OCR, whatever. Some day we will be able to tell Siri to do it, but leave that for now.
    - Design your form to reflect the task involved (i.e., the business process) and not the database schema.
    - On the form, group like fields together, logically.
    - Eliminate duplicate data entry, or prepopulate from previous data entry.
    - Allow users to complete fields in the order they wish (i.e., no interdependency).
    - Allow for tabbing between fields (keyboard is faster than mouse), so know how the browser supports this (see that RCUX guideline).
    - Allow for final validation at the page level, not at field-level entry - way better for heads-down users. For example, ADF messages allow you to see a list of all validation errors on a page on a final submit or navigation action, and to easily navigate to the point of error.
    - Better still, be error tolerant. Allow users to enter data in formats they are comfortable with. Bind any relevant user preference setting to the input format allowed (for example, the locale date format). Explore what data entry conversion can do for you automatically too (see the ADF converter demos; convenience patterns can also be written).
    - Only ask for data input when it's needed. Get rid of, or hide, optional fields.
    - Cut down on the number of mandatory fields, and mark them clearly (use a *).
    - Clearly label the fields in plain language.

    I am sure you have a few more tips on forms design for data entry users. Remember the user - and share your tips in the comments.

  • Coda-like experience for Ubuntu

    - by Dillon Gilmore
    I'm a web developer who's going to transition from using Mac OS X to Ubuntu. I've been using Coda for some time, only because it makes web development easy. I know a full-fledged equivalent isn't available for Linux, but I would like to know about apps that specialize in the same tasks that Coda offers. I plan on switching to Vim for code editing; I'm extremely proficient, and will install the Janus plugin and be good to go. One thing that makes editing in Coda so amazing is that it's extremely good at SFTP: you can drag and drop files and/or folders from your local drive to the server, and you can also edit code directly on the server. The problem here is that using Vim I don't know of a way to edit code on a remote server while using my own Vim settings and plugins. To solve this, I would like to know of a good SFTP client OR a good SFTP CLI. A CLI that could synchronize my files after a file has been modified would be perfect, but isn't necessary. Now, one of the biggest and best features of Coda is its ability to view your databases. You get to create a database, create tables, add stuff, delete stuff, and view the contents of a table (all this without writing a single SQL statement). I will admit that databases are my weak point, but they are a very important part of my job. A tool that specializes in databases would be perfect. I would prefer not to use the command line for database work, but if there is a database CLI that I'm missing, it could potentially be useful. So I guess I'm asking for two things: a tool that makes databases easier to visualize, and a tool that assists in pushing my local code to a server.
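
    For the remote-editing half, a sketch of two common terminal approaches (both assume ordinary SSH access to the server; the paths are placeholders):

        # Mount the remote document root locally with sshfs, then edit the
        # files with your own local Vim, settings and plugins included:
        mkdir -p ~/remote-site
        sshfs user@server:/var/www/site ~/remote-site

        # Or keep editing locally and mirror the changes up after each save:
        rsync -avz --delete ~/projects/site/ user@server:/var/www/site/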

  • How to keep your third party libraries up to date?

    - by Joonas Pulakka
    Let's say that I have a project that depends on 10 libraries, and within my project's trunk I'm free to use any versions of those libraries. So I start with the most recent versions. Then each of those libraries gets an update once a month (on average). Now, keeping my trunk completely up to date would require updating a library reference every three days. This is obviously too much. Even though version 1.2.3 is usually a drop-in replacement for version 1.2.2, you never know without testing. Unit tests aren't enough; if it's a DB/file engine, you have to ensure that it works properly with files that were created with older versions, and maybe vice versa. If it has something to do with GUI, you have to visually inspect everything. And so on. How do you handle this? Some possible approaches:

    - If it ain't broke, don't fix it. Stay with your current version of the library as long as you don't notice anything wrong with it when used in your application, no matter how often the library vendor publishes updates. Small incremental changes are just waste.
    - Update frequently in order to keep each change small. Since you'll have to update some day in any case, it's better to update often so that you notice any problems early, when they're easy to fix, instead of jumping over several versions and letting potential problems accumulate.
    - Something in between. Is there a sweet spot?

  • Oracle GoldenGate Active-Active Part 1

    - by Nick_W
    My name is Nick Wagner, and I'm a recent addition to the Oracle Maximum Availability Architecture (MAA) product management team. I've spent the last 15+ years working on database replication products, and the last 10 years working on the Oracle GoldenGate product, so most of my posting will probably be focused on OGG.

    One question that comes up all the time is around active-active replication with Oracle GoldenGate: how do I know if my application is a good fit for active-active replication with GoldenGate? To answer that, it really comes down to how you plan on handling conflict resolution. I will delve into topology and deployment in a later blog, but here is a simple architecture:

    The two most common resolution routines are host-based resolution and timestamp-based resolution.

    Host-based resolution is used less often, but works with the fewest application changes. Think of it like this: any transactions from SystemA always take precedence over any transactions from SystemB. If there is a conflict on SystemB, then the record from SystemA will overwrite it. If there is a conflict on SystemA, then it will be ignored. It is quite a bit less restrictive, and in most cases, as long as all the tables have primary keys, host-based resolution will work just fine.

    Timestamp-based resolution, on the other hand, is a little trickier. In this case, you can decide which record is overwritten based on timestamps. For example, does the older record get overwritten with the newer record, or vice versa? This method not only requires primary keys on every table, but it also requires every table to have a timestamp/date column that is updated each time a record is inserted or updated. Most homegrown applications can be customized to include these requirements, but it's a little more difficult with 3rd-party applications, and might even be impossible for large ERP-type applications.

    If your database has these features - whether it's primary keys for host-based resolution, or primary keys and timestamp columns for timestamp-based resolution - then your application could be a great candidate for active-active replication. But table structure is not the only requirement. The other consideration applies when there is a conflict; i.e., do I need to perform any notification or track down the user whose data was overwritten? In most cases, I don't think it's necessary, but if it is required, OGG can always create an exceptions table that contains all of the overwritten transactions so that people can be notified. It's a bit of extra work to implement this type of option, but if the business requires it, then it can be done. Unless someone is constantly monitoring this exceptions table or has an automated process for dealing with exceptions, there will be a delay in getting a response back to the end user.

    Ideally, when setting up active-active resolution we can include some simple procedural steps or configuration options that reduce - or in some cases eliminate - the potential for conflicts. This makes the whole implementation that much easier and more foolproof. I'll cover these in my next blog.

  • Do ORMs enable the creation of rich domain models?

    - by Augusto
    After using Hibernate on most of my projects for about 8 years, I've landed at a company that discourages its use and wants applications to interact with the DB only through stored procedures. After doing this for a couple of weeks, I haven't been able to create a rich domain model of the application I'm starting to build, and the application just looks like a (horrible) transactional script. Some of the issues I've found are:

    - You cannot navigate the object graph, as the stored procedures load only the minimum amount of data, which means we sometimes have similar objects with different fields. One example: we have a stored procedure to retrieve all the data for a customer, and another to retrieve account information plus a few fields from the customer.
    - Lots of the logic ends up in helper classes, so the code becomes more "structured" (with entities used like old C structs).
    - More boring scaffolding code, as there's no framework to extract result sets from a stored procedure and put them in an entity.

    My questions are:

    - Has anyone been in a similar situation and disagreed with the stored procedure approach? What did you do?
    - Is there an actual benefit to using stored procedures, apart from the silly point of "no one can issue a drop table"?
    - Is there a way to create a rich domain using stored procedures? I know there's the possibility of using AOP to inject DAOs/Repositories into entities to be able to navigate the object graph. I don't like this option, as it's very close to voodoo.

  • How to create a deb package that installs a series of files

    - by fossfreedom
    I would like to create a brand new deb package to install a series of files. If at all possible, I would like the installation to untar a folder containing these files into a known folder location. Failing that, some knowledge of how to package the source folders and files would be very useful. The question is: is this possible, and if so, how? Let's give an example: ~/mypluginfolder/ contains the files x and y and a subfolder called abc, which in turn contains another file called z. I want to tar this folder:

        tar -cvf myfiles.tar ~/mypluginfolder

    I presume my debian package would look like:

        myfiles.tar.gz
        myfiles+ppafoss_0.1-1/
            myfiles.tar
            DEBIAN
                changelog, compat, control, install, rules
            source

    Is it possible to somehow untar myfiles.tar into a known folder location, for example /usr/share/rhythmbox/plugins/? Thus the final result would be:

        /usr/share/rhythmbox/plugins/mypluginfolder
        /usr/share/rhythmbox/plugins/mypluginfolder/x
        /usr/share/rhythmbox/plugins/mypluginfolder/y
        /usr/share/rhythmbox/plugins/mypluginfolder/abc/z

    Since Launchpad presumably needs source, advice is also sought as to where I should drop the source folders and files within the deb package structure. This will eventually become a series of individual Launchpad PPA packages. What I'd prefer (but may not be able to achieve...) is to keep my packaging to a minimum: create a series of packages from a template and adjust the bare minimum (changelog etc. plus the tar file / the file & folder structure).
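
    Rather than untarring at install time, the usual trick is to stage the files under the paths they should occupy, so dpkg lays them down directly. A minimal binary-package sketch (the package name and control fields below are made up; Launchpad PPAs additionally require a proper source package built with debuild):

        # Stage the plugin under its final install path.
        mkdir -p myplugin_0.1-1/usr/share/rhythmbox/plugins
        cp -r ~/mypluginfolder myplugin_0.1-1/usr/share/rhythmbox/plugins/

        # Minimal control file - dpkg-deb needs at least these fields.
        mkdir -p myplugin_0.1-1/DEBIAN
        cat > myplugin_0.1-1/DEBIAN/control <<'EOF'
        Package: myplugin
        Version: 0.1-1
        Section: sound
        Priority: optional
        Architecture: all
        Maintainer: Your Name <you@example.com>
        Description: Install mypluginfolder into the Rhythmbox plugins directory
        EOF

        dpkg-deb --build myplugin_0.1-1
        sudo dpkg -i myplugin_0.1-1.deb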

  • Is there ever a reason to do all an object's work in a constructor?

    - by Kane
    Let me preface this by saying this is not my code nor my coworkers' code. Years ago, when our company was smaller, we had some projects we needed done that we did not have the capacity for, so they were outsourced. Now, I have nothing against outsourcing or contractors in general, but the codebase they produced is a mass of WTFs. That being said, it does (mostly) work, so I suppose it's in the top 10% of outsourced projects I've seen. As our company has grown, we've tried to take more of our development in-house. This particular project landed in my lap, so I've been going over it, cleaning it up, adding tests, etc. There's one pattern I see repeated a lot, and it seems so mind-blowingly awful that I wondered if maybe there is a reason and I just don't see it. The pattern is an object with no public methods or members, just a public constructor that does all the work of the object. For example (the code is in Java, if that matters, but I hope this to be a more general question):

        public class Foo {
            private int bar;
            private String baz;

            public Foo(File f) {
                execute(f);
            }

            private void execute(File f) {
                // FTP the file to some hardcoded location,
                // or parse the file and commit to the database, or whatever
            }
        }

    If you're wondering, this type of code is often called in the following manner:

        for (File f : someListOfFiles) {
            new Foo(f);
        }

    Now, I was taught long ago that instantiating objects in a loop is generally a bad idea, and that constructors should do a minimum of work. Looking at this code, it seems it would be better to drop the constructor and make execute a public static method. I did ask the contractor why it was done this way, and the response I got was "We can change it if you want". Which was not really helpful. Anyway, is there ever a reason to do something like this, in any programming language, or is this just another submission to The Daily WTF?

  • Making LISPs manageable

    - by Andrea
    I am trying to learn Clojure, which seems a good candidate for a successful LISP. I have no problem with the concepts, but now I would like to start actually doing something. Here comes my problem: as I mainly do web stuff, I have been looking into existing frameworks, database libraries, templating libraries, and so on. Often these libraries are heavily based on macros. Now, I very much like the possibility of writing macros to get a simpler syntax than would otherwise be possible, but it definitely adds another layer of complexity. Let me take the example of a migration in Lobos, from a blog post:

        (defmigration add-posts-table
          (up []
            (create clogdb
              (table :posts
                (integer :id :primary-key)
                (varchar :title 250)
                (text :content)
                (boolean :status (default false))
                (timestamp :created (default (now)))
                (timestamp :published)
                (integer :author [:refer :authors :id] :not-null))))
          (down []
            (drop (table :posts))))

    It is very readable indeed, but it is hard to recognize what the structure is. What does the function timestamp return? Or is it a macro? Having all this freedom to write my own syntax means that I have to learn other people's syntax for every library I want to use. How can I learn to use these components effectively? Am I supposed to learn each small DSL as a black box?

  • Heterogeneous Datacenter Management with Enterprise Manager 12c

    - by Joe Diemer
    The following is a guest blog, contributed by Bryce Kaiser, Product Manager at Blue Medora.

    When I envision a perfect datacenter, it would consist of technologies acquired from a single vendor across the entire server, middleware, application, network, and storage stack - apps to disk - that meets your organization's every IT requirement with absolute best-of-breed solutions in every category. To quote a familiar motto, your datacenter would consist of "Hardware and Software, Engineered to Work Together". In almost all cases, practical realities dictate something far less than the IT utopia mentioned above. You may wish to leverage multiple vendors to keep licensing costs down, a single vendor may not have an offering in the IT category you need, or your preferred vendor may quite simply not have the solution that meets your needs. In other words, your IT needs dictate a heterogeneous IT environment. Heterogeneity, however, comes with additional complexity. The following are two pretty typical challenges:

    1) No end-to-end visibility into the enterprise-wide application deployment. Each vendor solution added to an infrastructure may bring its own tooling, creating different consoles for different vendor applications and platforms.

    2) No visibility into performance bottlenecks. When multiple management tools operate independently, you lose diagnostic capabilities, including the ability to identify cross-tier issues with databases, hung requests, slowness, and memory leaks, as well as hardware errors/failures causing DB/MW issues.

    As adoption of Oracle Enterprise Manager (EM) has increased, especially since the release of Enterprise Manager 12c, Oracle has seen an increase in the number of customers who want to leverage their investments in EM to manage non-Oracle workloads. Enterprise Manager provides a single pane of glass view into their entire datacenter. By creating a highly extensible framework via the Oracle EM Extensibility Development Kit (EDK), Oracle has provided the tooling for business partners such as my company, Blue Medora, as well as customers, to easily fill gaps in the ecosystem and enhance existing solutions. As mentioned in the previous post on the Enterprise Manager Extensibility Exchange, customers have access to an assortment of Oracle- and partner-provided solutions through this Exchange, which is accessed at http://www.oracle.com/goto/emextensibility. Currently, there are over 80 Oracle and partner provided plug-ins across the EM 11g and EM 12c versions. Blue Medora is one of those contributing partners; you will find 3 of our solutions there, including our flagship plug-in for VMware. Let's look at Blue Medora's VMware plug-in as an example of what I'm trying to convey. Here is a common situation solved by true visibility into your entire stack:

    Symptoms
    • My database is bogging down, yet the database appears okay internally. Maybe it's starved for resources?
    • My OS tooling is showing everything is "OK". Something doesn't add up.

    Root cause
    • Through the VMware plug-in we can see the problem is actually on the virtualization layer.

    Solution
    • From within Enterprise Manager - the same tool you use for all of your database tuning - we can overlay the data of the database target, host target, and virtual machine target for a true picture of the root cause.

    Here is the console view:

    Perhaps your monitoring conditions are more specific to your environment. No worries, Enterprise Manager still has you covered. With Metric Extensions you have the "Next Generation" of user-defined metrics, which easily bring the power of your existing management scripts into a single console while leveraging the proven Enterprise Manager framework. Simply put, Oracle Enterprise Manager boasts a growing ecosystem that provides the single pane of glass for your entire datacenter, from the database and beyond. Bryce can be contacted at [email protected]

  • A quick hello to the Western Kentucky .NET User Group

    - by Muljadi Budiman
    A few days back, I got a chance to speak at the Western Kentucky .NET User Group meeting in Murray, Kentucky. The opportunity came up because the original speaker, Jeff Blankenburg, had another obligation and was unable to come to this meeting. I volunteered to deliver his presentation, which is an overview of the MIX10 conference. It was a great experience for me; I got to drive around and do a little bit of sight-seeing - I can't say I've ever been to Kentucky before, so it was my first trip ever there. I got to meet the user group's current lead, Tom Turner, and got to chat about all kinds of stuff with the other members. Cheers to Matt Gawarecki and Brandon Sharp! The presentation itself mostly covers new features in Visual Studio 2010, which was recently released on April 12 - I got to demonstrate Historical Debugging in IntelliTrace, Parallel Stacks, and View Call Hierarchy, and to show some extensions. We also covered some of the new functionality in Silverlight 4 (webcam support and drag & drop, among others), and I got to show off Scott Guthrie's Windows Phone 7 Twitter app. Altogether, it was quite a bit to cover in 70 minutes or so, but I think everyone enjoyed it. Jeff provided me with the presentation slides (which I modified a bit) and demo applications, so I'm putting them up here for those who may be interested in downloading them. Please keep in mind that all the demos were made with VS2010 RC, so slight tweaks may be needed to get them to work on the RTM version.

  • Sitting Pretty

    - by Phil Factor
    Guest editorial for the Simple-Talk IT Pro newsletter: 'DBAs and SysAdmins generally prefer an expression of calmness under adversity. It is a subtle trick, and requires practice in front of a mirror to get it just right. Too much adversity and they think you're not coping; too much calmness and they think you're under-employed.'

    I dislike the term 'avatar' when used to describe a portrait photograph. An avatar, in the sense of a picture, is merely the depiction of one's role-play alter-ego, often a ridiculous bronze-age deity. However, professional image is important. The choice and creation of online photos has an effect on the way your message is received, and it is important to get that right. It is fine to use that photo of you after ten lagers on holiday in an Ibiza nightclub, but what works on Facebook looks hilarious on LinkedIn. My splendid photograph that I use online was done by a professional photographer at great expense, and I've never had the slightest twinge of regret when I remember how much I paid for it. It is me, but a more pensive and dignified edition, oozing trust and wisdom. One gasps at the magical skill that a professional photographer can conjure up, without digital manipulation, to make the best of a derisory noggin (ed: slang for a head). Even if he had offered to depict me as a semi-naked, muscle-bound, sword-wielding hero, I'd have demurred. No, any professional person needs a carefully cultivated image that looks right. I'd never thought of using that profile shot, though I couldn't help noticing the photographer flinch slightly when he first caught sight of my face. There is a problem with using an avatar: the use of a single image doesn't express the appropriate emotion. At the moment, it is weird to see someone with a laughing portrait writing something solemn. A neutral cast to the face, somewhat like a passport photo, is probably the best compromise. Actually, the same is true of a working life in IT. One of the first skills I learned was not to laugh at managers but, instead, to develop a facial expression that promoted a sense of keenness, energy and respect. Every profession has its own preferred facial cast. A neighbour of mine has the natural gift of a face that displays barely repressed grief. Though he is characteristically cheerful, he earns a remarkable income as a pallbearer. DBAs and SysAdmins generally prefer an expression of calmness under adversity. It is a subtle trick, and requires practice in front of a mirror to get it just right. Too much adversity and they think you're not coping; too much calmness and they think you're under-employed. With an appropriate avatar, you could do away with a lot of the need for 'smilies' to give clues as to the meaning of what you've written on forums and blogs. If you had a set of avatars showing the full gamut of human emotions expressible in writing - rage, fear, reproach, joy, ebullience, apprehension, exasperation, dissembling, irony, pathos, euphoria, remorse and so on - it would be quite a drop-down list on forums, but given the vast prairies of space on the average hard drive, who cares? It would cut down on the number of spats in forums, just as long as one picks the right avatar. As an unreconstructed geek, I find it hard to admit to the value of image in the workplace, but it is true. Just as we use professionals to tidy up and order our CVs and job applications, we should employ experts to enhance our professional image. After all, you don't perform surgery or dentistry on yourself, do you?

  • Lack of PHP experience = reduced compensation?

    - by Jake Tacholsavacky
    I interviewed for a C++ position at this company, and while they say that I knocked my interview out of the park, they also say that I am too senior for the position. Instead they would like to offer me a job programming in PHP. Unfortunately my PHP experience is limited. Because of this, they are expecting me to take a significant drop in salary with the hope that I will master PHP within a year and be promoted at my first annual performance review. I don't believe that this is reasonable. I agree with Joel when he writes, "Don’t look for experience with particular technologies." If they thought I was such a superstar, then they should have realized that I am capable of being productive in PHP within weeks, not a year! It seems like just an excuse to underpay me. Notwithstanding the fact that I was insulted by this offer, I think that I would be taking on much more risk than they would be; they won't guarantee what my post-PHP salary would be. What does the Stack Exchange community think about this offer? Would you take it?
