Search Results

Search found 4580 results on 184 pages for 'faster'.

Page 101/184 | < Previous Page | 97 98 99 100 101 102 103 104 105 106 107 108  | Next Page >

  • Do large number of internal broken links affect SEO?

    - by TheBigK
    We have a WordPress blog and had the Disqus plugin installed for several months. Around late August this year, the plugin created a ton of URLs that linked to non-existent locations on our website. For example:
      Correct URL: domain.com/correct-URL/
      Disqus created: domain.com/correct-URL/344322/ - throws 404
                      domain.com/correct-URL/433466/ - throws 404
    So essentially, Google found a LARGE number of broken links that pointed to unknown locations on our own domain. As the count of those 404 errors rose, our site suffered a massive drop in traffic and the crawl rate dropped to 10% of what it was earlier. I wish to know - can a large number of internal broken links (we have over 99k of them) cause rankings to drop? I've fixed the issue in one go by creating a 301 redirect from each bad URL to the correct URL and removing Disqus. Google, however, only drops the count by ~1000 daily as I mark errors as 'fixed' in Google Webmaster Tools. Is there any way to speed this up? Should I set a custom crawl rate of 'Fast' in GWT to make Google crawl our website faster? I'd appreciate your inputs and experience.

    Read the article

  • 12.10 upgrade broke brightness keys [closed]

    - by Chris Morgan
    I have been running Ubuntu (64-bit) on my HP 6710b laptop (Core 2 Duo with integrated graphics) for several years, and the backlight brightness keys have always worked. Since I upgraded to Ubuntu 12.10 earlier today, those keys no longer work. The secondary function keys:
      Fn+F3: sleep; still works (and considerably faster than ever before!)
      Fn+F8: battery info; still works
      Fn+F9: reduce brightness; stopped working in 12.10
      Fn+F10: increase brightness; stopped working in 12.10
    It may also be worthwhile mentioning that X does not appear to be receiving the brightness events at all, or at least is not sending them on. (I detected this with a key logger I wrote for a uni project, which uses X's Record extension; it is informed of the sleep and battery info keystrokes, but doesn't receive the brightness ones at all.) In the meantime, I know that I can use the Brightness & Lock settings screen to alter the brightness. (Wow! I can suddenly make my backlight darker than I could before—I can go right down to turning the backlight off, something I couldn't do before... but this model has a fairly dim screen, so I don't expect to use that much, if ever.) How can I get the brightness keys working again? This question is probably strongly related to "I can't control my Brightness in HP Compaq 6710s".
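
    In case it helps anyone with the same problem, the sysfs backlight interface gives a scriptable version of that workaround; this is a rough sketch only (the directory name under /sys/class/backlight varies by driver, and writing to it usually needs root):

        # Sketch: set the backlight to half of its maximum via /sys/class/backlight.
        import glob

        devices = glob.glob("/sys/class/backlight/*")
        if devices:
            dev = devices[0]                        # e.g. .../intel_backlight or .../acpi_video0
            with open(dev + "/max_brightness") as f:
                max_level = int(f.read())
            with open(dev + "/brightness", "w") as f:   # usually requires root
                f.write(str(max_level // 2))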

    Read the article

  • Blurry printed raster images with Brother MFC-8840D

    - by Adam Monsen
    (NOTE: crossposted here: ubuntuforums.org/showthread.php?t=1621795) I've got a Brother MFC-8840D. Works great with Ubuntu server! Setting up a CUPS print server was pretty straightforward, and I also finally got network scanning working reliably with saned. Printing documents and Web pages works well: fonts are crisp/clear, etc. One issue has got me completely vexed: printing raster (i.e. JPG) images. They are blurry. For example, I can scan a page of black and white text at 150 or 300 dpi. The grayscale image looks perfect on my monitor. But the printed version is much blurrier than the original, regardless of the "print resolution" dpi I choose. As a counterexample, if I use the "copy" function of the MFC-8840D, the copy looks excellent, and this function is much, much faster than scanning and then printing the scan. I've googled around a bunch and tried different tricks (printing a PDF with the image from Evince, printing with GIMP, EOG and other applications) but I just can't print anything that looks as good as a copy made with the MFC-8840D. Any ideas? I'm using Ubuntu 10.04.1 LTS server. I'm using the PPD file from solutions.brother.com. Thanks, -Adam

    Read the article

  • VBScript and Xpath excluding duplicates [closed]

    - by Malachi
    I am trying to pull names from an XML document using VBScript. The XML document structure is:
        <Aliases>
          <Alias PartyType="DF" CaseID="000000" NameType=""> Name Name</Alias>
          <Alias PartyType="DF" CaseID="000000" NameType=""> Name Name</Alias>
          <Alias PartyType="DF" CaseID="000000" NameType=""> Name Name</Alias>
          ...
        </Aliases>
    The XML file might have 100 rows with the same name coming from several different CaseIDs, because for this part of my VBScript I am trying to pull all the different names from all cases. But here is the issue: I don't want to return duplicates. Is there a way to do this with an XPath expression, or should I try to do this with VBScript? EDIT: I am pretty sure that I am going to have to do this with VBScript. Would it be faster and more efficient to solve this issue in VBScript, in XPath, or in populating the XML I am retrieving information from (this might prove more difficult than the other two options)? I am also asking a similar question on Stack Overflow.
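
    For reference, the shape of the dedup logic I'm after, sketched in Python for brevity (in VBScript a Scripting.Dictionary would play the role of the set; the file name here is made up):

        # Sketch: collect distinct alias names; the set silently drops duplicates.
        import xml.etree.ElementTree as ET

        tree = ET.parse("aliases.xml")              # hypothetical file name
        unique_names = set()

        for alias in tree.getroot().findall("Alias"):
            name = (alias.text or "").strip()
            if name:
                unique_names.add(name)

        for name in sorted(unique_names):
            print(name)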

    Read the article

  • Ubuntu running like mud, system hangs and looks like it's running at 3 FPS

    - by user240803
    The system specs: AMD XP 3200+, 1 GB DDR 333 RAM, 160 GB IDE HD, NVIDIA FX 5500 AGP video card, Compaq Presario sr1230nx. The system takes forever to boot, and when it does it runs like total mud; it reminds me of an overloaded system that has too many windows open or something... It's a fresh install. I've tried so many things, like new memory (it had a 512 MB stick) and a new video card (the onboard 8 MB SiS sounded like the problem, but wasn't... it has gotten a little faster now, but not by much), and I tried to disable everything on the motherboard that could be disabled, with no help... This machine runs Windows XP, 7, and 8 JUST FINE!!! I mean, for a single-core CPU, Win 8 runs AWESOME!!! But I already have a gaming desktop that has Windows 8 Pro; I want a Linux machine to get some time in and learn a few things... I want Ubuntu because of the Software Center, so I can install the things I want until I am familiar with the command line... I've worked on computers since I was 12 and I remember some of the DOS commands, but I guess these are a little different... Anyway, any ideas? I've also tried both drivers for the NVIDIA card and that didn't help either... It's not the card, since it did this with both the NVIDIA card and the SiS onboard... It also does this in live mode from the USB, so I don't think it's the hard drive... I'm running out of hardware options to try... I know this version of Linux works because I've booted it on other machines and it ran great... What is with this Compaq? Here is a video of exactly what it's doing... Let me know if you need anything else; I am right by the computer tonight, so ask anything... http://youtu.be/-P-XNo81098

    Read the article

  • Making efficient voxel engines using "chunks"

    - by Wardy
    Concept: I'm currently looking into how voxel engines work, with a view to possibly making one myself. I see a lot of stuff like this ... https://sites.google.com/site/letsmakeavoxelengine/home/chunks ... which talks about how to go about reducing the draw calls. What I can't seem to understand is how it actually saves draw call counts, on the basis of the logic being something like this ...
        Without chunks:
          foreach voxel in myvoxels
            DrawIfVisible()
        With chunks:
          foreach chunk in mychunks
            DrawIfVisible()
          which then does ...
            foreach voxel in myvoxels
              DrawIfVisible()
    So surely you saved nothing?! You still make a draw call for each visible voxel, do you not? A visible voxel needs a draw call in either scenario. The only real saving I can see is that the logic that evaluates a chunk will be able to determine whether a large number of voxels are visible or not, effectively saving a bit of "is this chunk visible" CPU time. But it's the draw calls that interest me ... the fewer of those, the faster the application. EDIT: In case it makes any difference, I will probably be using XNA (DirectX, not OpenGL) for my engine, so don't consider my choice of example in the link above my choice of technology. But this question is such that I doubt it would matter.
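
    For contrast, here's my understanding of what the chunk approach usually implies, as a runnable toy in Python with invented names (it counts draw calls instead of issuing real ones): each chunk bakes its visible voxel faces into a single vertex buffer once, so a visible chunk costs one draw call no matter how many voxels it contains.

        # Toy sketch only: counts draw calls to show where the saving comes from.
        draw_calls = 0

        def draw_mesh(vertex_buffer):
            global draw_calls
            draw_calls += 1              # one GPU draw call per buffer, however big it is

        class Chunk:
            def __init__(self, visible_voxel_faces):
                # Baked once (or when the chunk changes), NOT every frame:
                self.vertex_buffer = list(visible_voxel_faces)  # stand-in for a GPU buffer
                self.visible = True                             # stand-in for a frustum test

            def draw(self):
                draw_mesh(self.vertex_buffer)                   # ONE call per chunk

        # 100 chunks, each baking 4096 voxels' worth of faces into a single buffer:
        chunks = [Chunk(range(4096)) for _ in range(100)]

        for chunk in chunks:
            if chunk.visible:            # cheap per-chunk visibility check
                chunk.draw()

        print(draw_calls)                # 100 (one per visible chunk), not 409600 (one per voxel)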

    Read the article

  • Oracle R Distribution 2-13.2 Update Available

    - by Sherry LaMonica
    Oracle has released an update to the Oracle R Distribution, an Oracle-supported distribution of open source R. Oracle R Distribution 2-13.2 now contains the ability to dynamically link the following libraries on both Windows and Linux: the Intel Math Kernel Library (MKL) on Intel chips, and the AMD Core Math Library (ACML) on AMD chips. To take advantage of the performance enhancements provided by Intel MKL or AMD ACML in Oracle R Distribution, simply add the MKL or ACML shared library directory to the LD_LIBRARY_PATH system environment variable. This automatically enables MKL or ACML to make use of all available processors, vastly speeding up linear algebra computations and eliminating the need to recompile R. Even on a single core, the optimized algorithms in the Intel MKL libraries are faster than R's standard BLAS library. Open-source R is linked to NetLib's BLAS libraries, but they are not multi-threaded and only use one core. While R's internal BLAS are efficient for most computations, it's possible to recompile R to link to a different, multi-threaded BLAS library to improve performance on eligible calculations. Compiling and linking R yourself can be involved, but for many, the significantly improved calculation speed justifies the effort. Oracle R Distribution notably simplifies the process of using external math libraries by enabling R to auto-load MKL or ACML. For R commands that don't link to BLAS code, taking advantage of database parallelism using embedded R execution in Oracle R Enterprise is the route to improved performance. For more information about rebuilding R with different BLAS libraries, see the linear algebra section in the R Installation and Administration manual. As always, the Oracle R Distribution is available as a free download to anyone. Questions and comments are welcome on the Oracle R Forum.

    Read the article

  • How to store bitmaps in memory?

    - by Geotarget
    I'm working with general-purpose image rendering and high-performance image processing, so I need to know how to store bitmaps in memory (24bpp/32bpp, compressed/raw, etc.). I'm not working with 3D graphics or DirectX / OpenGL rendering, so I don't need to use graphics-card-compatible bitmap formats. My questions:
      • What is the "usual" or "normal" way to store bitmaps in memory? (in C++ engines/projects?)
      • How to store bitmaps for high-performance algorithms, such that read/write times are the fastest? (fixed array? with/without padding? 24-bpp or 32-bpp?)
      • How to store bitmaps for applications handling a lot of bitmap data, to minimize memory usage? (JPEG? or a faster [de]compression algorithm?)
    Some possible methods:
      • Use a fixed packed 24-bpp or 32-bpp int[] array and simply access pixels using pointer access; all pixels are allocated in one contiguous memory chunk (could be 1-10 MB).
      • Use a form of "sparse" data storage so each line of the bitmap is allocated separately, using more memory but requiring smaller contiguous memory segments.
      • Store bitmaps in their compressed form (PNG, JPG, GIF, etc.) and unpack only when needed, reducing the amount of memory used. Delete the unpacked data if it's not used for 10 secs.
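
    To make the first method concrete, here's the addressing arithmetic for a fixed packed 32-bpp layout, sketched in Python for brevity (a C++ engine would do the same arithmetic over a raw byte or int buffer; the class and its fields are just my illustration): pixel (x, y) lives at byte offset (y * width + x) * 4 in one contiguous allocation.

        # Packed 32-bpp (RGBA) bitmap in one contiguous buffer -- sketch only.
        class Bitmap32:
            def __init__(self, width, height):
                self.width = width
                self.height = height
                self.data = bytearray(width * height * 4)   # one contiguous chunk

            def offset(self, x, y):
                return (y * self.width + x) * 4             # row-major, 4 bytes per pixel

            def set_pixel(self, x, y, r, g, b, a=255):
                o = self.offset(x, y)
                self.data[o:o + 4] = bytes((r, g, b, a))

            def get_pixel(self, x, y):
                o = self.offset(x, y)
                return tuple(self.data[o:o + 4])            # (r, g, b, a)

        img = Bitmap32(640, 480)
        img.set_pixel(10, 20, 255, 0, 0)
        print(img.get_pixel(10, 20))                        # (255, 0, 0, 255)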

    Read the article

  • Breaking up classes and methods into smaller units

    - by micahhoover
    During code reviews a couple of devs have recommended I break up my methods into smaller methods. Their justification was (1) increased readability and (2) that the back trace that comes back from production showing the method name is more specific to the line of code that failed. There may have also been some colorful words about functional programming. Additionally, I think I may have failed an interview a while back because I didn't give an acceptable answer about when to break things up. My inclination is that when I see a bunch of methods in a class or across a bunch of files, it isn't clear to me how they flow together, and how many times each one gets called. I don't really have a good feel for the linearity of it as quickly just by eye-balling it. The other thing is that a lot of people seem to place a premium on organization over content (e.g. 'Look at how organized my sock drawer is!' Me: 'Overall, I think I can get to my socks faster if you count the time it took to organize them'). Our business requirements are not very stable. I'm afraid that if the classes/methods are very granular, it will take longer to refactor when requirements change. I'm not sure how much of a factor this should be. Anyway, computer science is part art / part science, but I'm not sure how much this applies to this issue.

    Read the article

  • Build vs Rebuild

    - by prash
    Build means compile and link only the source files that have changed since the last build, while Rebuild means compile and link all source files regardless of whether they changed or not. Build is the normal thing to do and is faster. Sometimes the versions of project target components can get out of sync and a rebuild is necessary to make the build successful. In practice, you never need to Clean. Build or Rebuild Solution builds or rebuilds all projects in your solution, while Build or Rebuild <project name> builds or rebuilds the StartUp project. To set the StartUp project, right-click on the desired project name in the Solution Explorer tab and select Set as StartUp project. The project name now appears in bold. Compile just compiles the source file currently being edited. It is useful to quickly check for errors when the rest of your source files are in an incomplete state that would prevent a successful build of the entire project. Ctrl-F7 is the shortcut key for Compile. All source files that have changed are saved when you request a build/rebuild, so you don't have to save them first. When you run your executable (F5 or Ctrl-F5), Visual Studio saves all your changed source files and builds anything that changed, so you don't need to explicitly do those steps every time. This allows for quick "trial and error" debugging. Incidentally, if you like those little Visual Studio keyboard shortcuts, you can download posters of the C# and VB.NET ones, respectively (I am personally a big fan of using keyboard shortcuts :) ).
      Visual Studio 2010: http://www.microsoft.com/downloads/en/details.aspx?FamilyID=92ced922-d505-457a-8c9c-84036160639f
      Visual Studio 2005 C#: http://www.microsoft.com/downloads/details.aspx?FamilyID=c15d210d-a926-46a8-a586-31f8a2e576fe&DisplayLang=en
      VB.NET: http://www.microsoft.com/downloads/details.aspx?FamilyID=6bb41456-9378-4746-b502-b4c5f7182203&DisplayLang=en

    Read the article

  • Is it conceivable to have millions of lists of data in memory in Python?

    - by Codemonkey
    I have over the last 30 days been developing a Python application that utilizes a MySQL database of information (specifically about Norwegian addresses) to perform address validation and correction. The database contains approximately 2.1 million rows (43 columns) of data and occupies 640MB of disk space. I'm thinking about speed optimizations, and I've got to assume that when validating 10,000+ addresses, each validation running up to 20 queries to the database, networking is a speed bottleneck. I haven't done any measuring or timing yet, and I'm sure there are simpler ways of speed optimizing the application at the moment, but I just want to get the experts' opinions on how realistic it is to load this amount of data into a row-of-rows structure in Python. Also, would it even be any faster? Surely MySQL is optimized for looking up records among vast amounts of data, so how much help would it even be to remove the networking step? Can you imagine any other viable methods of removing the networking step? The location of the MySQL server will vary, as the application might well be run from a laptop at home or at the office, where the server would be local.
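
    To make the question concrete, this is roughly the structure I have in mind, sketched with invented table and column names (the real schema is beside the point): pull everything once, then index it in a dict keyed on whatever field the validation actually searches by.

        # Rough sketch -- table and column names are invented, not from my schema.
        import mysql.connector                      # assumes mysql-connector-python

        conn = mysql.connector.connect(host="localhost", user="app",
                                       password="...", database="addresses")
        cur = conn.cursor()
        cur.execute("SELECT street, house_number, postal_code, city "
                    "FROM norwegian_addresses")
        rows = cur.fetchall()                       # one round trip for the whole table
        conn.close()

        # Index by the field the validation actually searches on, e.g. postal code:
        by_postal_code = {}
        for row in rows:
            by_postal_code.setdefault(row[2], []).append(row)

        # Each validation then becomes a dict lookup instead of a networked query:
        candidates = by_postal_code.get("0150", [])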

    Read the article

  • How to sell logistical procedures that require less time to perform but more finesse?

    - by foampile
    I am working with a group where part of the responsibilities is managing a certain set of configuration files which, of course, have the same skeleton/structure across different environments but different values (like server, user, this setting, that setting, etc.). Pretty classic scenario... The problem is that everyone just goes and modifies the final, environment-specific files and basically repeats the work for every environment. Personally, I am offended to have to perform repeatable, mundane tasks in this day and age when we have technologies to automate it all. So I devised a very simple procedure of abstracting the files into templates, stubbing env-specific values with parameters, and then wrote a simple Perl script that, given a template and an environment matrix with env-specific values for each param, produces the final file. So this is nothing special, cutting-edge or revolutionary -- I am pretty sure that 20 years ago efficient places did their CM like that. However, it requires that changes are made at the template level and then distributed across different environments using the script, not by making changes in the final environment-specific files. This is where I am encountering resentment, as they feel "comfortable" doing it their old, manual, repeated-labor way. Personally, I don't have a problem with them working hard rather than smart, but the problem is that when I have to build on top of someone else's changes, I have to merge their changes into my template from a specific file, which takes time and is grueling. So my question is how to go about selling my method, which makes things so much faster, in an environment that is resistant to change and where most things have to be done at the level of the least competent team member?
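
    For the curious, the whole script boils down to something like this (mine is Perl; this is a minimal rendering of the same idea in Python, with invented file and parameter names):

        # Sketch of template + per-environment value substitution (names invented).
        from string import Template

        env_matrix = {
            "dev":  {"server": "dev-db01",  "user": "app_dev"},
            "prod": {"server": "prod-db01", "user": "app_prod"},
        }

        # app.conf.tmpl might contain lines like:  server=$server  user=$user
        with open("app.conf.tmpl") as f:
            template = Template(f.read())

        for env, values in env_matrix.items():
            with open("app.conf." + env, "w") as out:
                out.write(template.substitute(values))  # raises KeyError on missing params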

    Read the article

  • Two Cloudy Observations from Oracle OpenWorld

    - by GeneEun
    Now that the dust has settled from another amazing Oracle OpenWorld, I wanted to reflect back on a couple of key observations I made during the event. First, it was pretty clear that Cloud was again a big deal at this year's conference. Yes, the Oracle Database 12c announcement was also huge, but for most it was hard to not notice that Oracle continues to be "all-in" with respect to cloud computing. Just to give you an idea of the emphasis on Cloud, there were over 300 Cloud-related sessions at this year's OpenWorld. If you caught some of the demo booths in the Oracle Red Lounge, then you saw some of the great platform, application, and social services that are now part of Oracle Cloud, as well as numerous demos of private cloud products that Oracle offers. Second, during Thomas Kurian's keynote presentation on Oracle Cloud, he announced the Preview Availability of a new service called Oracle Developer Cloud Service. This new platform service will provide developers with instant access to environments to better manage the application development lifecycle in the cloud. It provides development project teams access to favorite tools like Hudson, Git, Github, wikis, and tasks to help make innovation faster, more collaborative, and more effective. There's also integration with IDEs like Eclipse, NetBeans, and JDeveloper. If you're a developer, it's an awesome addition to Oracle Cloud's platform services! Want more details about Oracle Developer Cloud Service? Click here.

    Read the article

  • Most efficient implementation of a tree in C++

    - by Topo
    I need to write a tree where each element may have any number of child elements, and because of this each branch of the tree may have any length. The tree is only going to receive elements at first, and then it is going to be used exclusively for iterating through its branches in no specific order. The tree will have several million elements and must be fast but also memory efficient. My plan is to make a node class to store the elements and the pointers to its children. When the tree is fully constructed, it would be transformed into an array or something faster and, if possible, loaded into the processor's cache. Construction and search on the tree are two different problems. Can I focus on how to solve each problem in the best way individually? The construction has to be as fast as possible but it can use memory as it pleases. Then the transformation into a format that gives us speed when iterating the tree's branches. This should preferably be an array, to avoid going back and forth from RAM to cache for each element of the tree. So the real question is: which structure should I use to implement the tree to maximize insert speed, and how can I transform it into a structure that gives me the best speed and memory?
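
    To illustrate the two phases I mean (build with child pointers, then flatten into one contiguous array in iteration order), here is a language-agnostic sketch in Python; in C++ the flat result would be a single std::vector, and the node fields here are just my assumptions:

        # Sketch: build with child references, then flatten to a contiguous array
        # laid out in the order it will be iterated (preorder here).
        class Node:
            def __init__(self, value):
                self.value = value
                self.children = []           # any number of children

        def flatten_preorder(root):
            flat = []                        # in C++: one contiguous std::vector
            stack = [root]
            while stack:
                node = stack.pop()
                flat.append(node.value)
                # reversed() keeps the children in their original order in the output
                stack.extend(reversed(node.children))
            return flat                      # iterate this sequentially, cache-friendly

        root = Node("a")
        root.children = [Node("b"), Node("c")]
        root.children[0].children = [Node("d")]
        print(flatten_preorder(root))        # ['a', 'b', 'd', 'c']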

    Read the article

  • Attributes of an Ethical Programmer?

    - by ahmed
    Software that we write has ramifications in the real world. If not, it wouldn't be very useful. Thus, it has the potential to sweep across the world faster than a deadly manmade virus or to affect society every bit as much as genetic manipulation. Maybe we can't see how right now, but in the future our code will have ever-greater potential for harm or good. Of course, there's the issue of hacking. That's clearly a crime. Or is it that clear? Isn't hacking acceptable for our government in the event of national security? What about for other governments? Cases of life-and-death emergency? Tracking down deadbeat parents? Screening the genetic profile of job candidates? Where is the line drawn? Who decides? Do programmers have responsibility for how their code is used? What if a programmer writes code to pry into confidential information or copy-protected material? Does he bear responsibility along with the person who used the program? What about a programmer who knowingly or unknowingly writes code to "fix the books?" Should he be liable?

    Read the article

  • Platinum Services – The Highest Level of Service in the Industry

    - by cwarticki
    Oracle Platinum Services provides remote fault monitoring with faster response times and patch deployment services to qualified Oracle Premier Support customers – at no additional cost. We know that disruptions in IT systems availability can seriously impact business performance. That’s why we engineer our hardware and software to work together. Oracle engineered systems are pre-integrated to reduce the cost and complexity of IT infrastructures while increasing productivity and performance. And now, customers who choose the extreme performance of Oracle engineered systems have the power to access the added support they need – Oracle Platinum Services – to further optimize for high availability at no additional cost. In addition to receiving the complete support essentials with Oracle Premier Support, qualifying Oracle Platinum Services customers also receive:
      • 24/7 Oracle remote fault monitoring
      • Industry-leading response and restore times
        o 5-Minute Fault Notification
        o 15-Minute Restoration or Escalation to Development
        o 30-Minute Joint Debugging with Development
      • Update and patch deployment
    Visit us online to learn more about how to get Oracle Platinum Services.

    Read the article

  • Responsive Design: Which Framework Should I Use? CSS3 & HTML5

    - by Jayhal
    I've been looking for a suitable set of HTML5/CSS3 foundation files to start new projects on. I started off piecing together my own files, but I believe I might be better served by finding a solid and fairly compatible (with me) CSS3/HTML5 framework and then tweaking certain things that may not best suit my own process. I'd love to find something that is responsive and that includes aspects focusing on layout, type (horizontal and vertical baselines), form and interface components, and cross-browser issues, and that is preferably built on something more than just a simple CSS reset, but does include rebuilding elements consistently across browsers for a clean working slate. Extra features like polyfills and others are great, as are good documentation and examples. So far, off the top of my head, I know of: Skeleton, 1140 Grid, 320 & Up (plus BP), HTML5 Boilerplate 2.0 and Mobile, Inuit.css, Less Framework, Fluir, Perkins.Less, and a few WP themes. Are there any great ones I don't know about? I work a lot in WP, and something that is easily incorporated (but also stands alone) is ideal. A wide feature set (plugins and the like) while maintaining the ability to cut it down when needed (flexibility) is also a big plus, as is a faster learning curve, since I want to start using whatever I find immediately. What are some of the better options you guys might be able to recommend? Systems or scripts, plugins, and other related tools are also welcome. Thanks!

    Read the article

  • Question regarding Readability vs Processing Time

    - by Jordy
    I am creating a flowchart for a program with multiple sequential steps. Every step should be performed only if the previous step was successful. I use a C-based programming language, so the layout would be something like this:
        METHOD 1:
        if(step_one_succeeded()) {
            if(step_two_succeeded()) {
                if(step_three_succeeded()) {
                    //etc. etc.
                }
            }
        }
    If my program had 15+ steps, the resulting code would be terribly unfriendly to read. So I changed my design and implemented a global error code that I keep passing by reference, to make everything more readable. The resulting code would be something like this:
        METHOD 2:
        int _no_error = 0;
        step_one(_no_error);
        if(_no_error == 0) step_two(_no_error);
        if(_no_error == 0) step_three(_no_error);
        if(_no_error == 0) step_four(_no_error);
    The cyclomatic complexity stays the same. Now let's say there are N steps. And let's assume that checking a condition takes 1 clock and performing a step takes no time. The processing cost of Method 1 can be anywhere between 1 and N. The processing cost of Method 2, however, is always N-1. So Method 1 will be faster most of the time. Which brings me to my question: is it bad practice to sacrifice time in order to make the code more readable? And why (not)?

    Read the article

  • Copy to USB memory stick really slow?

    - by Eloff
    When I copy files to the USB device, it takes much longer than in Windows (same USB device, same port): it's faster than USB 1.0 speeds (1 MB/s) but much slower than USB 2.0 speeds (12 MB/s). Copying 1.8 GB takes me over 10 minutes (it should be < 3 min). I have two identical SanDisk Cruzer 8 GB sticks, and I have the same problem with both. I have a Super Talent 32 GB USB SSD in the neighboring port and it works at the expected speeds. The problem I seem to see in the GUI is that the progress bar goes to 90% almost instantly, completes to 100% a little slower and then hangs there for 10 minutes. Interrupting the copy at this point seems to result in corruption at the tail end of the file. If I wait for it to complete, the copy is successful. Any ideas? dmesg output below:
        [64059.432309] usb 2-1.2: new high-speed USB device number 5 using ehci_hcd
        [64059.526419] scsi8 : usb-storage 2-1.2:1.0
        [64060.529071] scsi 8:0:0:0: Direct-Access SanDisk Cruzer 1.14 PQ: 0 ANSI: 2
        [64060.530834] sd 8:0:0:0: Attached scsi generic sg4 type 0
        [64060.531925] sd 8:0:0:0: [sdd] 15633408 512-byte logical blocks: (8.00 GB/7.45 GiB)
        [64060.533419] sd 8:0:0:0: [sdd] Write Protect is off
        [64060.533428] sd 8:0:0:0: [sdd] Mode Sense: 03 00 00 00
        [64060.534319] sd 8:0:0:0: [sdd] No Caching mode page present
        [64060.534327] sd 8:0:0:0: [sdd] Assuming drive cache: write through
        [64060.537988] sd 8:0:0:0: [sdd] No Caching mode page present
        [64060.537995] sd 8:0:0:0: [sdd] Assuming drive cache: write through
        [64060.541290] sdd: sdd1
        [64060.544617] sd 8:0:0:0: [sdd] No Caching mode page present
        [64060.544619] sd 8:0:0:0: [sdd] Assuming drive cache: write through
        [64060.544621] sd 8:0:0:0: [sdd] Attached SCSI removable disk

    Read the article

  • Pros and Cons of Facebook's React vs. Web Components (Polymer)

    - by CletusW
    What are the main benefits of Facebook's React over the upcoming Web Components spec and vice versa (or perhaps a more apples-to-apples comparison would be to Google's Polymer library)? According to this JSConf EU talk and the React homepage, the main benefits of React are:
      • Decoupling and increased cohesion using a component model
      • Abstraction, Composition and Expressivity
      • Virtual DOM & Synthetic events (which basically means they completely re-implemented the DOM and its event system)
      • Enables modern HTML5 event stuff on IE 8
      • Server-side rendering
      • Testability
      • Bindings to SVG, VML, and <canvas>
    Almost everything mentioned is being integrated into browsers natively through Web Components except this virtual DOM concept (obviously). I can see how the virtual DOM and synthetic events can be beneficial today to support old browsers, but isn't throwing away a huge chunk of native browser code kind of like shooting yourself in the foot in the long term? As far as modern browsers are concerned, isn't that a lot of unnecessary overhead/reinventing of the wheel? Here are some things I think React is missing that Web Components will take care of. Correct me if I'm wrong.
      • Native browser support (read "guaranteed to be faster")
      • Write script in a scripting language, write styles in a styling language, write markup in a markup language.
      • Style encapsulation using Shadow DOM (React instead has this, which requires writing CSS in JavaScript. Not pretty.)
      • Two-way binding

    Read the article

  • ADF & Fusion Development Webcast–December 11th 2012

    - by JuergenKress
    Get up to date and learn everything you wanted to know about Oracle ADF & Fusion Development, plus live Q&A chats with Oracle technical staff. Oracle Application Development Framework (ADF) is the standards-based, strategic framework for Oracle Fusion Applications and Oracle Fusion Middleware. Oracle ADF's integration with the Oracle SOA Suite, Oracle WebCenter and Oracle BI creates a complete, productive development platform for your custom applications. Join us at this FREE virtual event and learn the latest in Fusion Development, including:
      • Is Oracle ADF development faster and simpler than Forms, Apex or .Net?
      • Mobile Application Development with ADF Mobile
      • Oracle ADF development with Eclipse
      • Oracle WebCenter Portal and ADF Development
      • Application Lifecycle Management with ADF
      • Building Process Centric Applications with ADF and BPM
      • Oracle Business Intelligence and ADF Integration
      • Live Q&A chats with Oracle technical staff
    Developer lead, manager or architect – this event has something for everyone. Don't miss this opportunity. For details and registration please click here. View Session Abstracts. We look forward to welcoming you at this free event! December 11th, 2012: 9:00 – 13:00 GMT & 10:00 – 14:00 CET & 12:00 – 16:00 AST & 13:00 – 17:00 MSK & 14:30 – 18:30 IST. WebLogic Partner Community: for regular information, become a member of the WebLogic Partner Community; please visit: http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Technorati Tags: ADF,ADF training,Fusion Developer day,webcast,WebLogic Specialization,WebLogic Community,Oracle,OPN,Jürgen Kress

    Read the article

  • Internet unusably slow with Realtek Semiconductor Co., Ltd. RTL8111/8168B card

    - by user42424
    So I have recently installed Ubuntu 11.10 as a dual boot with Windows 7. After the install I had about 300 updates, so I installed them. At first I could use the internet, although it was extremely slow. However, now I cannot: sometimes it will load and other times it will simply time out. When I try to download something it will either take forever or not download at all. This is a wired system. On the Windows side my speeds are fine. Any help would be greatly appreciated. Also, like I said, I am new to Linux/Ubuntu, so please be nice. One last thing: I also installed 11.10 as the same dual boot on my laptop, and the wireless speed is the same as on Windows. Only the wired desktop gives me the problem. Here is some hardware info... hope it helps.
        Mobo: Gigabyte GA-880GMA (AMD) / CPU: AMD Phenom(tm) II X4 965 / 16 GB RAM / Realtek PCIe GBE Family Controller / Cisco Linksys E2000 / Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 06)
        eth0      Link encap:Ethernet  HWaddr 50:e5:49:33:64:cf
                  inet addr:192.168.1.118  Bcast:192.168.1.255  Mask:255.255.255.0
                  inet6 addr: fe80::52e5:49ff:fe33:64cf/64 Scope:Link
                  UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
                  RX packets:76722 errors:0 dropped:76722 overruns:0 frame:76722
                  TX packets:49692 errors:0 dropped:65 overruns:0 carrier:0
                  collisions:0 txqueuelen:1000
                  RX bytes:107956638 (107.9 MB)  TX bytes:4342553 (4.3 MB)
                  Interrupt:44 Base address:0x2000
    Thanks to roadmr, problem solved! I powered down the PC, unplugged the power from the PC end, waited a few (maybe 3) minutes, plugged the power back in, and pushed and held the power button for 30+ seconds. Let go, powered on the PC, and my Internet is fine! Downloads and web speed blaze, just like on my Win 7 boot, maybe even faster. Problem solved, thanks to all!

    Read the article

  • How can I get the best performance for playing games?

    - by Oli
    I've been playing a couple of Wine games today and decided to switch to Metacity to see what the performance difference was like. If you've never done it before, you just run metacity --replace (but don't do that if you use Unity!). Anyway, surprise surprise, it was like playing on a dedicated Windows gaming machine. Playing under Metacity today was bliss: much higher framerates and just a fluidity that you'd expect from a native game. I'm not sure I can go back. Switching to Metacity is no hardship, but I wonder if there's anything else in the WM landscape that I should try out. I'm essentially looking for suggestions for the best way to play games. Mix up WMs, dedicated X sessions, whatever... as long as it makes Wine games run faster. Small print: one process per answer (e.g. new X session + OpenBox). We should probably land on a benchmark so we can show percentage improvement over a stock Compiz desktop; I'm open to suggestions in the comments. If people could test it and submit how much it improves things for them in the comments, that would give others a good idea of whether it's worth the pain.

    Read the article

  • What is the appropriate way to manage MySQL connections through C#?

    - by Sylca
    My question, at the bottom line, is: what is the appropriate (best) way to manage our connection to a MySQL db with C#? Well, currently I'm working on a C# (WinForms) <- MySQL application and I've been looking at Server Connections in MySQL Administrator, watching the execution of my MySQL queries, connections opening and closing, ... and so on! In my C# code I'm working like this, and this is an example:
        public void InsertInto(string qs_insert)
        {
            try
            {
                conn = new MySqlConnection(cs);
                conn.Open();
                cmd = new MySqlCommand();
                cmd.Connection = conn;
                cmd.CommandText = qs_insert;
                cmd.ExecuteNonQuery();
            }
            catch (MySqlException ex)
            {
                MessageBox.Show(ex.ToString());
            }
            finally
            {
                if (conn != null)
                {
                    conn.Close();
                }
            }
        }
    Meaning, every time I want to insert something into a db table, I call this method and pass the insert query string to it. The connection is established, opened, the query executed, and the connection closed. So, we could conclude that this is the way I manage the MySQL connection. From my point of view, currently this works and it's enough for my requirements. Well, you have Java & Hibernate, C# & Entity Framework, and I'm doing this :-/ and it's confusing me. Should I use MySQL with Entity Framework? What is the best way for collaboration between C# and MySQL? I don't want to worry about whether the connection I've opened is closed, whether that same connection could be faster, ...

    Read the article

  • Should I avoid or embrace asking questions of other developers on the job?

    - by T.K.
    As a CS undergraduate, the people around me were either learning or paid to teach me, but as a software developer, the people around me have tasks of their own. They aren't paid to teach me, and conversely, I am paid to contribute. When I first started working as a software developer co-op, I was introduced to a huge code base written in a language I had never used before. I had plenty of questions, but didn't want to bother my co-workers with all of them - it wasted their time and hurt my pride. Instead, I spent a lot of time bouncing between the IDE and the browser, trying to make sense of what had already been written and to differentiate between expected behavior and symptoms of bugs. I'd ask my co-workers when I felt that the root of my lack of understanding was an in-house concept that I wouldn't find on the internet, but aside from that, I tried to confine my questions to lunch hours. Naturally, there were occasions where I wasted time searching the internet to understand something in the code that had, at its heart, an in-house concept, but overall, I felt I was productive enough during my first semester, contributing about as much as one could expect and gaining a pretty decent understanding of large parts of the product. I was wondering what senior developers felt about that mindset. Should new developers ask more questions to get up to speed faster, or should they do their own research? I see benefits to both mindsets, and anticipate a large variety of responses, but I figure new developers might appreciate your answers without thinking to ask this question.

    Read the article

< Previous Page | 97 98 99 100 101 102 103 104 105 106 107 108  | Next Page >