Search Results

Search found 23658 results on 947 pages for 'mixed case'.


  • Don't miss Virtual Developer Day - All about ADF next week

    - by Shay Shmeltzer
    In case you haven't heard, we are holding a free online Virtual Developer Day next week - July 10th - that you should attend even if you think you already know ADF. First, the registration link: http://bit.ly/fusiondev. While one of the tracks is aimed at developers who are relatively new to ADF - it covers ADF Faces, ADF Controller and a comparison of productivity with Forms and other tools - the two other tracks have great content on topics you might not be familiar with even if you already work with ADF. These include sessions about the upcoming ADF Mobile, the new ADF support in Eclipse, and Application Lifecycle Management with ADF and JDeveloper, as well as sessions that will open your mind to the areas where ADF integrates with other Fusion Middleware solutions such as BI, WebCenter and SOA. Most of the sessions are quite heavy on demos, and you'll get a chance to interact with the presenters and ask questions during the live event. You should register even if you can't attend the live event - that way you'll get an email pointing you to the recorded sessions for on-demand viewing. See you next week.

    Read the article

  • Back button after doing posts on the same page

    - by user441521
    I have 3 pages to my site. The 1st page allows you to select a bunch of options. Those options get sent to the 2nd page to be displayed with some data about those options. From there I can click on a link to get to page 3 for one of the options. On page 3 I can create, edit and delete, all on the same page, where reloads come back to page 3. I want a "back" button on page 3 to go back to page 2, but retain the options it had from the original page 1 request. Page 1 has a bunch of check boxes which are passed to page 2 as arrays to the controller. My thought is that I have to pass these arrays (I converted them to lists) to page 3 (even though page 3 doesn't directly need them) so that page 3 can use them in the back link, so it can recreate the values page 1 sent to page 2 originally. I'm using ASP.NET MVC, and when I pass the converted list it seems to render the type name (the result of ToString()) instead of the values: "types=System.Collections.Generic.List" (where types is a List). (As far as I understand, the default MVC model binder rebuilds a list from repeated query-string keys, e.g. types=1&types=2&types=3, not from the list's ToString() output.) Is this what is needed, or are there other options for getting a "back" button in this case to go back to page 2? It's sort of a pain to pass information to page 3 that isn't really relevant to page 3 except for the back button.

    Read the article

  • Problems syncing photos and strange effects of uploaded files from other devices

    - by Daniel
    I have a Galaxy Spica (GT-i5700) Android v2.1, rooted with Leshak dev 7 #123. But never mind the root info - the problem would be the same unrooted. The photos from this phone are stored in "sdcard/images"; nevertheless the phone also creates an "sdcard/DCIM" folder but only stores some thumbnails there.

    Problem nr 1: U1 only reads the DCIM folder for automatic photo upload, so photos stored on this phone are not uploaded. If I move photos to the "DCIM" folder, U1 recognises the photos and starts uploading them. Possible solution: could there be an option in the settings to set a preferred photo folder?

    Problem nr 2: Out of 74 pictures, 12 did not get uploaded. Pressing "Retry failed transfers" in Settings does nothing. Tapping the files whose status is "Upload failed, tap to retry" only changes the status to "Uploading..." but nothing gets uploaded. If I upload another file to U1, it is uploaded directly without any problem. It has nothing to do with file size: 1.1 MB files have been uploaded fine whilst some that failed are 0.8 MB.

    Problem nr 3: The photos from DCIM are in my case uploaded to a folder called "Pictures - GT-I5700" in U1. If I log in to the homepage and from there upload another photo to "Pictures - GT-I5700", it shows up in U1 on my phone fine. But when I tap it, U1 downloads the photo to "sdcard/U1/Pictures - GT-I5700". If it syncs photos from "sdcard/DCIM" to a specific folder, why not also download files to the same folder from which they are synced? After a while of usage, syncing and uploading files from different clients, it would be a mishmash of folders and places where files are stored, and considering that, I see no use for U1 at all.

    Another question: if my SD card in some way breaks down, some folders cannot be read, or the card is temporarily changed while U1 is running, does U1 consider that as files deleted and also delete them from the cloud?

    Read the article

  • "Missing Operating System" after installing Ubuntu 12.04 from a CD on a Macbook Pro

    - by Pierre
    I followed this guide to install Ubuntu 12.04 on my MacBook Pro 8,2 (late 2011): https://help.ubuntu.com/community/MactelSupportTeam/AppleIntelInstallation I used a CD. I synced the partition table in rEFIt, and it went fine. I do have an icon to boot into Linux, but when I launch it, after a few seconds "Missing Operating System" is displayed, and that's all... How can I fix that? The only clue I see is that the guide mentions this: "On the last dialog of the installer, be sure to click the "Advanced" button and choose to install the boot loader (grub) to your root Ubuntu partition, for example /dev/sda3. This will be the only partition with the EXT4 file system." In the Ubuntu 12.04 installation process there is no such option, but there is a dropdown menu to select where the grub bootloader should be installed. It was /dev/sda by default, but I selected my root Ubuntu partition (in my case, /dev/sda5). I got a warning message (but actually, it was the same warning message even when I selected /dev/sda), and I continued the installation... Thanks in advance for your help!

    Read the article

  • fail2ban on server with LXC Containers

    - by RoboTamer
    The issue is that modprobe and iptables don't work inside an LXC container. LXC is the userspace control package for Linux Containers, a lightweight virtual system mechanism sometimes described as "chroot on steroids". The iptables error inside the container is:

        # iptables -I INPUT -s 122.129.126.194 -j DROP
        iptables v1.4.8: can't initialize iptables table `filter': Table does not exist (do you need to insmod?) Perhaps iptables or your kernel needs to be upgraded.

    I am guessing that it can't work because the LXC containers share one kernel - the main server's kernel. How do I run fail2ban in this case? modprobe and iptables work on the main server, so my guess is that I could install it there and link to the logfiles somehow. Any suggestions?
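
    One host-side approach, sketched under assumptions: run fail2ban on the host, where modprobe and iptables do work, and point a jail at a log file inside the container's root filesystem. The commands assume Debian-style packaging:

        # on the host, not in the container
        apt-get install fail2ban
        service fail2ban restart

    with a jail in /etc/fail2ban/jail.local along these lines (the jail name and the container rootfs path are hypothetical illustrations, not tested settings):

        [container-ssh]
        enabled  = true
        filter   = sshd
        action   = iptables[name=ContainerSSH, port=ssh, protocol=tcp]
        logpath  = /var/lib/lxc/mycontainer/rootfs/var/log/auth.log
        maxretry = 5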

    Read the article

  • How to avoid being employed by companies that are candidates for DailyWTF stories?

    - by MainMa
    I'm reading The Daily WTF archives, especially the stories about IT-related companies which have a completely wrong approach to software development, the job of a developer, etc. Some stories are totally horrible: a company doesn't have a local network for security reasons; another one has a source control server which can only be accessed by the manager; etc. Add to that all the stories about managers who don't know anything about their work and make stupid decisions without listening to anybody. The thing is that I don't see how to know, during an interview, whether you would be employed by such a company. Of course, sometimes an interviewer says weird things which give you an idea that something is very wrong with the company (in my case, the last manager said I would work 100% of my time through Remote Desktop, connected to an old and slooooow machine, because "it avoids several people modifying the same source code"; maybe I should explain to him what SVN is). But in most cases, you will be unable to get enough information during the interview to form an exact image of the company. So how do you avoid being employed by this sort of company? I thought about asking to see documents like a documentation guide or code style guidelines. The problem is that I live in France, and here most companies don't have those documents at all, and in the rare cases where they exist, they are outdated, poorly written, never used, or force you to do things that don't make any sense. I also thought about asking to see how the programmers actually work. But seeing that they have dual screens or "late-modern-artsy-fartsy furnishings" doesn't mean that they don't have people making weird decisions that make it impossible to work there. Have you been in such situations? What have you tried? Has it worked?

    Read the article

  • Linux QoS: bulk data transmission during idle times

    - by syneticon-dj
    How would I do a QoS setup where a certain low-priority data stream gets up to X Mbps of bandwidth, but only if the current total bandwidth (of all streams/classes) on the interface does not exceed X? At the same time, other data streams/classes must not be limited to X. The use case is an ISP billing the traffic by calculating the bandwidth average over 5-minute intervals and billing the maximum. I would like to keep the maximum usage to a minimum (i.e. quench the bulk transfer during interface busy times) but get the data through during idle/low-traffic times. Looking at the frequently used classful schedulers CBQ, HTB and HFSC, I cannot see a straightforward way to accomplish this.
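
    For reference, the closest standard approximation with HTB is a sketch like the one below: give the bulk class a token guaranteed rate, a ceiling of X, and the lowest priority, so it only borrows bandwidth while the other classes are idle. The interface (eth0), line rate (100 Mbps), X = 20 Mbps, and the rsync port used for classification are all assumptions - and, as noted above, borrowing is governed by the other classes' demand rather than by a true "total below X" test, so this is an approximation, not an exact solution:

        # root qdisc: unclassified traffic falls into 1:10
        tc qdisc add dev eth0 root handle 1: htb default 10

        # parent class at the assumed 100 Mbps line rate
        tc class add dev eth0 parent 1: classid 1:1 htb rate 100mbit ceil 100mbit

        # normal traffic: may use the whole link, highest priority
        tc class add dev eth0 parent 1:1 classid 1:10 htb rate 99mbit ceil 100mbit prio 0

        # bulk traffic: token guaranteed rate, capped at X, lowest priority
        tc class add dev eth0 parent 1:1 classid 1:20 htb rate 1mbit ceil 20mbit prio 7

        # classify the bulk stream by destination port (rsync here, as an example)
        tc filter add dev eth0 parent 1: protocol ip u32 \
            match ip dport 873 0xffff flowid 1:20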

    Read the article

  • Two mail servers, need help with dns configuration for the backup one

    - by user92231
    I need to run a redundant backup mail server in case the main one goes down. The settings in GoDaddy look something like the following:

        A (Host) records:
            @      ->  ip address of mail1 (41.x.x.x)
            mail1  ->  ip address of mail1 (41.x.x.x)
            mail2  ->  ip address of mail2 (196.x.x.x)
        MX records (priority, host, points to):
            10  @  mail1.mydomain.com
            20  @  mail2.mydomain.com

    When mail1 goes down, mail2 is able to receive email. I can access it through the browser with no problem, but I want my users to be able to use POP3/SMTP as well, without changing anything in their Outlook. I don't want any impact on the users when mail1 is down. Also, I'm using Windows Server DFS to keep both mail folders in sync. Is this the right way, or should I be using something else?
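
    As a quick sanity check of failover records like these, the MX priorities can be inspected from any machine with dig (mydomain.com standing in for the real zone):

        # list the MX records and their priorities; lower number is tried first
        dig +short MX mydomain.com
        #   10 mail1.mydomain.com.
        #   20 mail2.mydomain.com.

        # confirm each mail host resolves to the expected address
        dig +short A mail1.mydomain.com
        dig +short A mail2.mydomain.com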

    Read the article

  • iptables ACCEPT policy

    - by kamae
    In Red Hat EL 6, the iptables INPUT policy is ACCEPT, but the INPUT chain has a REJECT entry at the end. /etc/sysconfig/iptables is as below:

        *filter
        :INPUT ACCEPT [0:0]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [0:0]
        -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        -A INPUT -p icmp -j ACCEPT
        -A INPUT -i lo -j ACCEPT
        -A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
        -A INPUT -j REJECT --reject-with icmp-host-prohibited
        -A FORWARD -j REJECT --reject-with icmp-host-prohibited
        COMMIT

    Do you know why the policy is ACCEPT and not DROP? I think setting a DROP policy is safer than ACCEPT in case of a mistake in the chain. In fact, the policy never matches any packet, because the trailing REJECT catches everything first:

        # iptables -L -v
        Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
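
    For comparison, a minimal sketch of the default-deny variant the question has in mind - the same rules, with the policy itself doing the rejecting. One commonly cited reason distributions ship ACCEPT plus a trailing REJECT instead: flushing the rules (e.g. "service iptables stop") then fails open rather than locking out remote administrators, and REJECT answers with icmp-host-prohibited instead of silently dropping packets:

        *filter
        # default-deny: anything the rules below don't accept is dropped
        :INPUT DROP [0:0]
        :FORWARD DROP [0:0]
        :OUTPUT ACCEPT [0:0]
        -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
        -A INPUT -p icmp -j ACCEPT
        -A INPUT -i lo -j ACCEPT
        -A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
        COMMIT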

    Read the article

  • Lucene best practice

    - by Dragos
    I am trying to understand how Lucene should be used. From what I have read, creating an IndexReader is costly, so using a SearcherManager should be the right choice. However, a SearcherManager should be produced by an NRTManager (which, by the way, should replace the IndexWriter for every add or delete operation performed). But in order to have an NRTManager, I should first have an IndexWriter, and here comes my problem. The documentation says:

    - an IndexWriter is thread-safe
    - the constructor of this class takes a Directory object, so it seems creating an instance should be costly (as in the case of an IndexReader)
    - all changes are buffered and flushed periodically (so they seem to encourage using a single instance)

    but:

    - the changes, although flushed, will only be visible after commit or close
    - after finishing making updates (add/delete), the instance should be closed

    I also found this: http://stackoverflow.com/questions/5374419/forgot-to-close-the-lucene-indexwriter-after-adding-documents-to-the-index where it is said that not closing a writer might ruin everything. So what am I really supposed to do? Is having a single IndexWriter instance a good idea (only ever commit, and never close it)? EDIT: What's more, if I use NRTManager, how can I make a commit? Is it even possible?

    Read the article

  • Volume randomly turning itself down on Windows 7 64-bit

    - by Arda Xi
    This is the weirdest issue I've ever encountered with my PC. Every so often, my sound will start playing back at a lower volume. This happens when watching video and when listening to music alike. It usually lasts up to a minute, after which the volume goes back up. The weird thing about it is that the volume control in Windows remains at 100%, even though the volume is audibly a lot lower. (No, I'm not going deaf; it's just my PC. I checked.) I have no idea where to even begin troubleshooting this. I'm using Windows 7 64-bit with an on-board Realtek sound card. Oh, just in case someone finds this question on Google or whatever: making sure the "When Windows detects communications activity" sound setting is on "Do nothing" may fix your problem. Unfortunately, it did not work for me. My settings all seem fine. This is my audio slider. (Took the screenshot while the issue showed.)

    Read the article

  • sudo su - username while keeping ssh key forwarding

    - by Florian Schulze
    If I have a server A into which I can log in with my SSH key, and I have the ability to "sudo su - otheruser", I lose key forwarding, because the environment variables are removed and the socket is only readable by my original user. Is there a way I can bridge the key forwarding through the "sudo su - otheruser", so I can do stuff on a server B with my forwarded key (git clone and rsync in my case)? The only way I can think of is adding my key to the authorized_keys of otheruser and running "ssh otheruser@localhost", but that's cumbersome to do for every user and server combination I may have.
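
    A sketch of one common workaround, assuming the filesystem supports POSIX ACLs and sudoers allows the environment override (SETENV or env_keep): grant otheruser access to the agent socket, then carry SSH_AUTH_SOCK across the user switch instead of using "su -".

        # grant otheruser access to the agent socket and its directory
        setfacl -m u:otheruser:x "$(dirname "$SSH_AUTH_SOCK")"
        setfacl -m u:otheruser:rw "$SSH_AUTH_SOCK"

        # switch user while keeping the socket path in the environment
        sudo -u otheruser SSH_AUTH_SOCK="$SSH_AUTH_SOCK" bash -l

        # as otheruser, the forwarded key should now be visible:
        ssh-add -l
        git clone git@serverB:repo.git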

    Read the article

  • How do you support your code after your employment ends?

    - by James
    What is the process for leaving a company (or even a group/division) in terms of code support? Is it best to handle all questions? Do you give the remaining developers access to yourself as a future resource? If so, is there a way to not give full access? I've experienced first hand how invaluable answers about the general software architecture from the initial developer can be. I understand that if serious assistance is needed, then it becomes a typical case of employment negotiation as a support contract. However, should serious assistance be required, what steps can you take to ease the process of contacting you? I was thinking of doing something like making a (YOUR_NAME)_codesupport @ (YOUR_FAVORITE_EMAIL_CLIENT).com address. My situation, specifically: I'm a co-op student, and as such I bounce around companies on 4-month stints. This means introducing myself to a lot of new code bases, as well as leaving a fair share of orphaned code behind when I leave a company. I feel bad if I leave junk code around.

    Read the article

  • Can DVCSs enforce a specific workflow?

    - by dukeofgaming
    So, I have this little debate at work where some of my colleagues (who are actually in charge of administrating our Perforce instance) say that workflows are strictly a process thing, and that the tools we use (in this case, the version control system) have no bearing on them. In other words, their point is that workflows (and their execution) are tool-agnostic. My take on this is that DVCSs are better at encouraging flexible yet well-defined workflows, because of the inherent branching occurring in the background (anonymous branches), and that you can enforce workflows through the deployment model you establish (e.g. pull requests through repository management, dictator/lieutenant roles with their machines set up as servers, etc.). I think that in CVCSs you have to enforce workflows through policies and policing, because there is only one way to share the code, while in DVCSs you just go with the flow based on the infrastructure/permissions that were set up for you. Even having made the arguments above, I'm still unable to fully convince them. Am I saying something the wrong way? If not, what other arguments or examples do you think would be useful to convince them? Edit: The main workflow we have been focusing on, because it makes sense to both sides, is the dictator/lieutenants workflow. My argument for this particular workflow is that there is no pipeline in a CVCS (because work is just shared in a centralized way), whereas there is an actual pipeline in DVCSs depending on how you deploy read/write permissions. Their argument is that this workflow can be achieved through branching, and while they do this in some projects (due to policy/policing), in other projects they forbid developers from creating branches.

    Read the article

  • OOW - Oracle Identity Management Demos

    - by B Shashikumar
    If you are in San Francisco or in the vicinity of the city, it must be hard not to feel the OpenWorld vibe. Oracle OpenWorld is now in high gear. If you haven't already checked out the Identity Management demo grounds in Moscone South, don't miss them. This year, the Oracle IDM product team has pulled out all the stops to bring together one of the most exciting sets of demos we have seen. The 9 Identity Management demos are all designed to prove why Oracle Identity Management is the most innovative and integrated solution in the world. Each demo validates several real-world use-case scenarios that need an end-to-end solution. And this year, there is an added bonus: if you check out all 9 IDM demos, you can enter to win an Apple TV. Just grab an entry form from here or from one of the IDM demo stations. Visit all nine IDM demos and get your form signed by the demo staff. Submit your form to be entered into a drawing for an Apple TV. Here is the complete lineup of all the Identity Management demos. Make sure you check us out.

    Read the article

  • Execute local script requiring arguments on Linux via plink

    - by c_maker
    Is it possible to execute (from Windows) a local script with arguments on a remote Linux system? Here's what I've got:

        plink 1.2.3.4 -l root -pw mypassword -m hello.sh

    Is there a way to do the same thing, but pass input parameters to hello.sh? I've tried many things, including:

        plink 1.2.3.4 -l root -pw mypassword -m hello.sh input1 input2

    In this case it seems that plink treats input1 and input2 as its own arguments, which makes sense. What are my options?
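
    One approach that generally works with both ssh and plink (a sketch, not verified against this exact setup): drop -m, stream the local script to a remote shell's stdin, and hand the arguments to that shell instead.

        # run a remote shell and feed hello.sh to its stdin;
        # everything after "--" becomes the script's $1, $2, ...
        plink 1.2.3.4 -l root -pw mypassword "bash -s -- input1 input2" < hello.sh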

    Read the article

  • JavaApplicationStub Compatibility with all java versions

    - by neelamsharma
    Hi, I am running a Java application on Mac OS X Leopard and on a Mac Mini. My Mac Mini was upgraded and is using Java version 1.6.0_17, while the Leopard machine has version 1.5.0_13. To run the application on the Mac Mini I have to copy JavaApplicationStub from "/System/Library/Frameworks/JavaVM.framework/Resources/MacOS/JavaApplicationStub" into the application's "Contents/Resources/Java". The application runs successfully on Leopard without this. If I do not copy the stub on the Mac Mini, I am unable to launch the application by double-clicking the app bundle: it shows the icon for a few seconds, then disappears, so in that case I have to run the application from its jar. I want to run the app with all versions of Java without copying JavaApplicationStub explicitly. If this is possible, what changes do I have to make? The Info.plist file contains a JVMVersion of 1.4+. Thanks!
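
    For what it's worth, the JVMVersion key mentioned above can be inspected and adjusted from a terminal with PlistBuddy. This is only a sketch (MyApp.app is a placeholder), and it merely widens the requested Java version - it may not fix a stub-binary mismatch by itself:

        # show the Java version range the bundle currently requests
        /usr/libexec/PlistBuddy -c 'Print :Java:JVMVersion' MyApp.app/Contents/Info.plist

        # request 1.5 or newer so both machines qualify
        /usr/libexec/PlistBuddy -c 'Set :Java:JVMVersion 1.5+' MyApp.app/Contents/Info.plist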

    Read the article

  • What is a good way to share internal helpers?

    - by toplel32
    All my projects share the same base library that I have built up over quite some time. It contains utilities and static helper classes to assist them where .NET doesn't exactly offer what I want. Originally all the helpers were written mainly to serve an internal purpose, and it has to stay that way, but sometimes they prove very useful to other assemblies. Now, making them public in a reliable way is more complicated than most would think: for example, all methods that assume nullable types must now contain argument checking, while internal callers should not be charged the price of that checking. The price might be negligible, but it is far from right. While refactoring, I have revisited this case multiple times, and I've come up with the following solutions so far:

    1. Have an internal and a public class for each helper. The internal class contains the actual code while the public class serves as an access point which does argument checking. Cons: the internal class requires a prefix to avoid ambiguity (the best presentation should be reserved for public types), and it isn't possible to discriminate methods that don't need argument checking.

    2. Have one class that contains both internal and public members (as conventionally implemented in the .NET Framework). At first this might sound like the best possible solution, but it has the same first unpleasant con as solution 1. Cons: internal methods require a prefix to avoid ambiguity.

    3. Have an internal class which is implemented by the public class, which overrides any members that require argument checking. Cons: it is non-static, so at least one instantiation is required. This doesn't really fit the helper class idea, since a helper generally consists of independent fragments of code and should not require instantiation. Non-static methods are also slower by a negligible degree, which doesn't really justify this option either.

    There is one general and unavoidable consequence: a lot of maintenance is necessary, because every internal member will require a public counterpart. A note on solution 1: the first con can be avoided by putting both classes in different namespaces; for example, you can have the real helper in the root namespace and the public helper in a namespace called "Helpers".

    Read the article

  • How Can I Effectively Interview an Oracle Candidate?

    - by Tim Medora
    First, I browsed through SO for matching questions and didn't find one, but please point me in the right direction if this exact question has already been asked. I work with and around programmers of various skill levels on various platforms. I would consider my skills to be strong in terms of relational database design, query development, and basic performance tuning and administration. I'm mid-level when it comes to database theory. My team is looking to me to ensure that we have the best talent on staff - in this case, an engineer experienced in Oracle administration. To me, a well-rounded database administrator, regardless of platform, should also be competent in developing against the database, so that is also a requirement. However, my database skills are centered on SQL Server 200x, with experience in a few other products like SAP MaxDB, Access, and FoxPro. How can I thoroughly assess the skills of an Oracle engineer? I can ask high-level database theory questions and talk about routine tasks that are common across platforms, but I want to dig deep enough that I can be confident in the people I hire. Normally, I would alternate very specific questions that have a right/wrong answer with architectural questions that might have several valid answers. Does anyone have an interview template, specific questions, or any other knowledge they can share? Even knowing the meaningful Oracle-related certifications would be a help. Thank you. EDIT: All the answers have been very helpful so far and I have given upvotes to everyone. I'm surprised that there are already 3 close votes on this question as "off topic". To be clear, I am specifically asking how an MS SQL Server engineer (like myself) can effectively interview a person with different but symbiotic skills. The question has already received specific, technical answers which have improved my own database design and programming skills. If this is more appropriate as a community wiki, please convert it.

    Read the article

  • Cannot use second display with 12.04 and Intel 2000/3000

    - by Carolyn Marenger
    I am unable to get anything to display on my second monitor, or even to get the system to recognize that there is a second screen. I am running Ubuntu 12.04 on a Gigabyte GA-H61M-S2PV, revision 2.0 box. The integrated chipset is an Intel 2000/3000, and there are both a D-Sub and a DVI-D display port on the MB. This is the first operating system I have installed on this system. I have a second monitor plugged into the DVI-D port via a DVI-D to D-Sub adapter. I cannot verify that the motherboard or adapter were/are working, short of installing Windows to test the theory. When I go into the "System Settings - Displays" control window, it shows one display. I have rebooted with the second monitor attached, and I have perused the BIOS settings in case the port might have been disabled. So far, I have had no indication that the second monitor is recognized, not even a flicker at power on. If I swap the monitors and cables between the DVI-D and D-Sub ports, the other monitor lights up, so I know the monitor and video cable are not the issue. Any suggestions would be appreciated. Thanks, Carolyn
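
    One diagnostic step worth trying from a terminal is to ask the driver directly which outputs it detects. The output names below are typical for the Intel driver but vary from system to system:

        # list every output the driver knows about and its connection state
        xrandr --query

        # if the second output is listed (names vary; e.g. VGA1, HDMI1, DVI1),
        # try forcing a mode on it:
        xrandr --output VGA1 --auto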

    Read the article

  • Thinkpad W510 with default graphics drivers shows weird brightness issues

    - by Chantz
    Hey guys, I am currently running 10.10 (32-bit) on a new ThinkPad W510 with an nVidia Quadro FX 880M graphics card, using the default graphics drivers that were installed with Ubuntu. My problem is that while I am logging in, the screen behaves normally as far as brightness is concerned: I can increase/decrease brightness with the Fn keys. But a few seconds after I log in, the screen goes pitch dark. Hitting Fn+Home flickers the screen all the way bright, then all the way dark. This behavior continues until I reach maximum brightness, in which case the screen stays bright for a few more seconds and then goes dark again if there is no activity, and the cycle continues. Have you guys faced this issue? If so, any pointers on how to resolve it? I am not alone; on the Ubuntu forum I saw another person having the same issue - link - but no solution. Please help! UPDATE: I followed the instructions that htorque mentions in his answer and it worked.

    Read the article

  • How to put an RGB image on a gnuplot 3D plot?

    - by Stefano Borini
    I want to plot an RGB image with gnuplot, and it must hover inside a 3D plot box so that it can act as a "floor" for my data. How can I do it? The gnuplot examples use a heatmap to map the RGB values, but this is not what I want. Apart from that, I receive an error: "GNUPLOT (plot_image): Color boxes cannot handle RGB components." In any case, a heatmap is not what I want: I need the full RGB data in the box, not mapped through a palette.
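
    A sketch of one way this can be done in gnuplot 4.4 or later, as far as I know: use the rgbimage style under splot, which draws the picture in the z = 0 plane beneath the data. File names and ranges are placeholders:

        # floorplot.gp - run with:  gnuplot -persist floorplot.gp
        set xrange [0:100]
        set yrange [0:100]
        set zrange [0:10]
        set ticslevel 0    # put the z = 0 plane at the bottom of the box

        # draw the image as the floor, then the data above it
        splot 'floor.png' binary filetype=png with rgbimage notitle, \
              'data.dat' using 1:2:3 with points pt 7 notitle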

    Read the article

  • Capturing BizTalk 2004 SQLAdapter failures

    - by DanBedassa
    I was recently working on a BizTalk 2004 project where I encountered an issue with capturing exceptions (inside my orchestration) coming from an external source - the database server being down, a non-existing stored procedure, and so on. I thought I might write this up in case it helps someone. To reproduce the issue, I just renamed the database to something different. The orchestration was failing at the point where I make a SQL request via a Request-Response port. The exception handlers were bypassed, but I could see a warning in the event log saying: "The adapter failed to transmit message going to send port". After scratching my head for a while (as a newbie to BTS 2004) trying to find a way to catch exceptions from the SQLAdapter in an orchestration, here is the solution I had:

    - Put the Send and Receive shapes inside a Scope shape
    - Set the Scope's transaction type to "Long Running"
    - Add a Catch block expecting type "System.Exception"
    - Set the "Delivery Notification" of the associated port to "Transmitted"
    - Change the "Retry Count" of the associated port to 0 (this makes sure BizTalk raises the exception, instead of a warning, so you can capture it)
    - Now capture and do whatever you want with the exception inside the Catch block

    Read the article

  • Failure Driven Development

    - by DevSolo
    At our shop, we strive to be agile, and I'd say we are making great strides. That said, a few of us have spotted a pattern we have started calling "Failure Driven Development". Failure Driven Development can basically be described as an agile release/iteration cycle where the bugs/features are guided not by tasks and stories with acceptance criteria, but by defects entered in the defect tracking software. Our team has a great Project Manager who strives to get the acceptance criteria from the customer(s), but it's not always possible. From my development chair, this is because the customer either doesn't know exactly what they want, or (and this is the kicker) two different "camps" at the customer's main office conflict over how a story should be implemented. Camp A will loosely dictate that Feature X works like this; then Camp B will fail it for not functioning like that. Hence the term "FDD": the process is driven by "failures". This leads to my question: has anyone else encountered this, and if so, any tips/suggestions for dealing with it? We have, of course, tried to get Camps A and B to agree up front, but everyone knows that isn't always possible. Thanks

    Read the article

  • CPU fan turns on and off repeatedly without booting

    - by rnso
    My PC has suddenly stopped starting up. On checking with the case open, the CPU fan starts running when the power is turned on, but it stops after about 2-3 seconds, restarts again after about 2-3 seconds, and the loop repeats. There is no beep and nothing appears on the screen. On searching the internet, I found there could be several reasons for this. I tried removing the hard disk and CD drive, tightening connections, etc., but to no avail. I also tried a new power supply, but the response is the same. Where could the problem be, and how can it be solved? Thanks in advance.

    Read the article
