Search Results

Search found 15385 results on 616 pages for 'context menu'.


  • Profiling Startup Of VS2012 – Ants Profiler

    - by Alois Kraus
    I just downloaded ANTS Profiler 7.4 to check how fast it is and how deep I can analyze the startup of Visual Studio 2012. The Pro version, which is the useful one, costs 445€, which is OK. To measure a complex system I decided to simply profile VS2012 (Update 1) on my older Intel 6600 2.4 GHz with 3 GB RAM and 32-bit Windows 7. ANTS Profiler is really easy to use, so let's try it out. ANTS Profiler wants to start the profiled application on its own, which seems to be rather common. I chose method-level timing of all managed methods, and in the configuration menu I asked for all call stacks to get full details. Once this is configured you are ready to go.
    After that you can select the Method Grid to view wall-clock time in ms. I hate percentages, which are on by default, because I want to see where absolute time is spent and not something else. From the Method Grid I can drill down in a nice view and look at the decompiled methods where the time is spent. This really does look nice. But did you see the size of the scroll bar in the Method Grid? Although I wanted all call stacks, I get only about four pages of methods to drill down into. From the scroll bar I would guess that the profiler shows me about 150 methods for the complete VS startup. This is nonsense. I will never find a bottleneck in VS when I am presented with only a fraction of the methods that were actually executed. I also tried, in the configuration window, to profile the extremely trivial functions as well, but there was no noticeable difference. It seems that ANTS Profiler filters away far too many details to be useful for bigger systems.
    If you want to optimize a CPU-bound operation inside NUnit, then ANTS Profiler, with its line-level timings, is a very nice tool to work with. But for bigger stuff it is certainly not usable. I also do not like that I must start the profiled application from the profiler UI. This makes it hard to profile processes which are started by some other process. Next: JetBrains dotTrace

    Read the article

  • Switch from back-end to front-end programming: I'm out of my comfort zone, should I switch back?

    - by ripper234
    I've been a backend developer for a long time, and I really swim in that field. C++/C#/Java, databases, NoSql, caching - I feel very much at ease around these platforms/concepts. In the past few years, I started to taste end-to-end web programming, and recently I decided to take a job offer in a front end team developing a large, complex product. I wanted to break out of my comfort zone and become more of an "all around developer". Problem is, I'm getting more and more convinced I don't like it. Things I like about backend programming, and miss in frontend work:
    - More interesting problems - When I compare designing a server that handles massive data to adding another form to a page or changing the validation logic, I find the former a lot more interesting.
    - Refactoring refactoring refactoring - I am addicted to Visual Studio with ReSharper, or IntelliJ. I feel very comfortable writing code as it goes without investing too much thought, because I know that with a few clicks I can refactor it into beautiful code. To my knowledge, this doesn't exist at all in javascript.
    - Intellisense and navigation - I hate looking at a bunch of JS code without instantly being able to know what it does. In VS/IntelliJ I can summon the documentation, navigate to the code, climb up inheritance hierarchies ... life is sweet.
    - Auto-completion - Just hit Ctrl-Space on an object to see what you can do with it.
    - Easier to test - With almost any backend feature, I can use TDD to capture the requirements, see a bunch of failing tests, then implement, knowing that if the tests pass I did my job well. With frontend, while tests can help a bit, I find that most of the testing is still manual - fire up that browser and verify the site didn't break. I miss that feeling of "A green CI means everything is well with the world."
    Now, I've only seriously practiced frontend development for about two months, so this might seem premature ... but I'm getting a nagging feeling that I should abandon this quest and return to my comfort zone, because, well, it's so comfy and fun. Another point worth mentioning in this context is that while I am learning some frontend tools, a lot of what I'm learning is our company's specific infrastructure, which I'm not sure will be very useful later on in my career. Any suggestions or tips? Do you think I should give frontend programming "a proper chance" of at least six to twelve months before calling it quits? Could all my pains be growing pains, and will they magically disappear as I get more experienced? Or is gaining this perspective valuable enough, even if I plan to do more "backend stuff" later on, that it's worth gritting my teeth and continuing with my learning?

    Read the article

  • AWS EC2 Oracle RDB - Storing and managing my data

    - by llaszews
    When you create an Oracle Database on the Amazon cloud you will need to store your database files somewhere on the EC2 cloud. There are basically three places where database files can be stored:
    1. Local drive - The local drive that is part of the virtual server EC2 instance.
    2. Elastic Block Storage (EBS) - Network-attached storage that appears as a local drive.
    3. Simple Storage Service (S3) - 'Storage for the Internet'. S3 is not high speed and is intended for storing static document-type files. S3 can also be used for storing static web page files.
    Local drives are ephemeral, so they are not appropriate to be used as a database storage device. That leaves EBS, which is the best place to store database files. EBS volumes appear as local disk drives. They are actually network-attached to an Amazon EC2 instance. In addition, EBS persists independently from the running life of a single Amazon EC2 instance. If you use an EBS-backed instance for your database data, it will remain available after a reboot but not after a terminate. In many cases you would not need to terminate your instance but only stop it, which is the equivalent of a shutdown. In order to save your database data before you terminate an instance, you can snapshot the EBS volume to S3. Using EBS as a data store you can move your Oracle data files from one instance to another. This allows you to move your database from one region or zone to another. Unfortunately, to scale out your Oracle RDS on AWS you can not have read-only replicas. This is only possible with the other Oracle relational database - MySQL. The free micro instances use EBS as their storage. This is a very good white paper that has more details: AWS Storage Options. The white paper also discusses SQS, SimpleDB, and Amazon RDS in the context of storage devices. However, these are not storage devices you would use to store an Oracle database. This slide deck covers much of the information that is in the white paper: AWS Storage Options slideshow
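
    For illustration only, here is a minimal sketch of the snapshot step described above, using the boto3 Python SDK (a modern SDK that the original post does not use); the region and volume ID are placeholders, not values from the article:
      import boto3

      # Assumed region and volume id -- replace with the EBS volume that holds the Oracle data files.
      ec2 = boto3.client("ec2", region_name="us-east-1")

      # Snapshot the volume to S3-backed storage before terminating the instance.
      snapshot = ec2.create_snapshot(
          VolumeId="vol-0123456789abcdef0",
          Description="Oracle data files before terminating the instance",
      )
      print("Started snapshot:", snapshot["SnapshotId"])

      # Optionally block until the snapshot has completed.
      ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snapshot["SnapshotId"]])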

    Read the article

  • Loose Coupling and UX Patterns for Applications Integrations

    - by ultan o'broin
    I love that software architecture phrase loose coupling. There's even a whole book about it. And, if you're involved in enterprise methodology, you'll know just how important loose coupling is to the smart development of applications integrations too. Whether you are integrating offerings from the Oracle partner ecosystem with Fusion apps or working on applications coexistence scenarios, loose coupling enables the development of scalable, reliable, flexible solutions, with no second-guessing of technology. Another great book, Enterprise Integration Patterns: Designing, Building, and Deploying Messaging Solutions, tells us about the loose coupling benefits of reducing the assumptions that integration parties (components, applications, services, programs, users) make about each other when they exchange information. Eliminating assumptions applies to UI development too. The days of assuming it's enough to hard code a UI with linking libraries called code on a desktop PC for an office worker are over. The book predates PaaS development and SaaS deployments, and was written when web services and APIs were emerging. Yet it calls out how using middleware as an assumptions-dissolving technology "glue" is central to applications integration. Realizing integration design through a set of middleware messaging patterns (messaging in the sense of asynchronously communicating data) that enable developers to meet the typical business requirements of enterprises requiring integrated functionality is very Fusion-like. User experience developers can benefit from the loose coupling approach too. User expectations and work styles change all the time, and development is now about integrating SaaS through PaaS. Cloud computing offers a virtual pivot where a single source of truth (customer or employee data, for example) can be experienced through different UIs (desktop, simplified, or mobile), each optimized for the context of the user's world of work and task completion. Smart enterprise applications developers, partners, and customers use design patterns for user experience integration benefits too. The Oracle Applications UX design patterns (and supporting guidelines) enable loose coupling of the optimized UI requirements from code. Developers can get on with the job of creating integrations through web services, APIs and SOA without having to figure out design problems about how UIs should work. Adding the already user-proven UX design patterns (and supporting guidelines) to your toolkit means ADF and other developers can easily offer much more than just functionality and be super productive too. Great-looking application integration touchpoints can be built with our design patterns and guidelines too, for a seamless applications UX. One of Oracle's partners, Innowave Technologies, used a loose coupling architecture and our UX design patterns to create an integration for a customer that was scalable, cost effective, and fast to develop, and that kept users productive while paving a roadmap for customers to keep pace with the latest UX designs over time. Innowave CEO Basheer Khan, a Fusion User Experience Advocate, explains how to do it on the Usable Apps blog.

    Read the article

  • RTS Game Style Application [closed]

    - by Daniel Wynand van Wyk
    My question may seem somewhat odd, but I hope that my specifications will clarify EXACTLY what it is that I am after. I need some help choosing the right tooling for a particular endeavour. My background is in desktop application development and large back-end systems. I have worked primarily on the Microsoft stack using C# and the .Net framework. My goal is to develop a 2D, RTS style, interactive office simulation. The simulation will model various office spaces, office equipment, employees and their interactions with one another. The idea is to abstract the concept of an office completely. Under the hood the application will do many things that are nothing like a game. This includes P2P networking, VPN tunnelling, streaming video, instant messaging, document collaboration, remote screen sharing, file-sharing, virus scanning, VOIP, document scanning, faxing, emailing, distributed computing, content management and much more! A somewhat similar thing has been attempted by IBM, where they created a virtual office in second life. If their attempt was a game, the game-play would be notably horrible, to say the least! The users/players will drive and control my application through the various objects modelled in the simulation. A single application capable of performing all of these various tasks would be a nightmare to navigate for even the most expert user. Using the concept of a game, I can easily separate functionality by assigning them to objects that relate 1-1 with their real world counter-parts. This can greatly simplify computing for novice users, with many added benefits in terms of visibility, transparency of process and centralized configuration. My hope is to make complex computing tasks accessible to all kinds of users and to greatly reduce the cognitive load associated with using the many different utilities and applications inside office settings. The complexity is therefore limited to the complexity of the space in which you find yourself. I want the application to target as many platforms as possible and run on computers that have no accelerated graphics capabilities. The simulation won't contain any of the fancy eye-candy you find in modern games, to the contrary, my "game" will purposefully be clean and simple. The closest thing I could imagine would be an old game like "Theme Hospital" or the first instalment of "The Sims". All the content will be pre-created and not user-generated like Second Life. New functionality will be added via a plugin system. Given my background and nature of my "game", I would like to spend most of my time writing code that does not have to do with the simulated office, as the "game" is really just a glorified application menu. I have done much reading about existing engines, frameworks and tools. I need the help of an experienced game developer who has tried and tested various products over the years who can guide me in the right direction given my very particular needs. I would appreciate any help I can get!

    Read the article

  • The Minimalist's Approach to Content Governance

    - by Kellsey Ruppel
    This week on the blog, we want to focus on the content lifecycle and how important it is to have the tools in place to be able to properly manage all the phases of the content lifecycle. John Brunswick has some great advice when it comes to this topic, so expect to hear a lot from him this week! Originally posted by John Brunswick. Let's be honest - content governance is far from an exciting topic. BUT the potential of a very small intranet team creating and maintaining a platform that provides an organization with relevant, high value information, helping workers to get their jobs done with greater accuracy and in less time, is exciting. It is easy to quickly start producing content, but the challenge is ensuring that the environment is easy to navigate and use on the third week and during the third year. What can be done to bridge this gap? Over the next few blog entries let's take a pragmatic, minimalistic view of a process that can help any team manage a wealth of unstructured information. Based on an earlier article that I wrote around Portal Governance, I am going to focus on using technology as much as possible to support the governance of content with minimal involvement from users. The only certainty about content production is that business users are not fans of maintaining content. Maintenance is overhead and is a long-term investment whose value will possibly not be realized under the current content creator's watch. To add context to how we will use technical tools in this process, each post will highlight one section of the content lifecycle process as outlined below.
    Content Lifecycle Stages:
    1. Request - Understand the education, purpose, resource and success criteria for content
    2. Create - Determine access and workflow for content
    3. Manage - Understand ownership and review cycles
    4. Retire - Act on thresholds established during the request stage
    Within each stage we will also elaborate as to:
    1. Why - why would we entertain doing this?
    2. How - the steps that are needed to make it happen
    3. Impact - what is the net benefit or loss based on the process
    Over the course of this week, we will dive deep into the stages and the minimal amount of time, effort and process within each to make some meaningful gains in the improvement of user experience and productivity in their search for information. It might be a stretch to say that we can make content governance exciting, but hopefully it can end up being painless and paying dividends. And if you'd like to hear first hand from a customer that is managing their content lifecycle with Oracle WebCenter, be sure to join us on Wednesday for this webcast "ResCare Solves Content Lifecycle Challenges with Oracle WebCenter"!

    Read the article

  • Second Monitor stays black/in power save mode

    - by Rob
    I'm using two monitors: a Belinea o.display 1 (recognized as a Rogen Tech Distribution Inc 20" by Ubuntu, but working fine) on the DVI output (connected via a DVI-to-VGA adapter) as my primary monitor, and a Dell 19" (recognized correctly) on the HDMI output (via an HDMI-to-DVI adapter) as the secondary monitor. The graphics controller is a GeForce 9500 GS. I'm running a fully updated Ubuntu 13.04 with nouveau 1:1.0.7-0ubuntu1. The problem is that the second monitor (the Dell) never seems to come out of standby during boot: the screen stays black and the status LED on the monitor stays orange (it's green when it's on). It is correctly recognized and the size of the desktop is set accordingly, it just stays black. Changing any setting via xrandr/arandr/etc. does nothing. The on-screen menu of the monitor reports it to be in power save mode. When using the proprietary NVIDIA drivers, the second monitor works just fine. But these drivers cause a lot of other problems on my system, so I would really like to avoid them. On Ubuntu 12.10 I had found a workaround: when moving the relative position of the second monitor slightly down and then up again, it would turn on and function normally:
      xrandr --output DVI-I-1 --mode 1680x1050 --pos 1280x0 --rotate normal --output HDMI-1 --mode 1280x1024 --pos 0x88 --rotate normal
      sleep 2
      xrandr --output DVI-I-1 --mode 1680x1050 --pos 1280x0 --rotate normal --output HDMI-1 --mode 1280x1024 --pos 0x0 --rotate normal
    This workaround stopped working after the update to 13.04, and now I'm looking for a new solution. Has anyone experienced something similar?
    xrandr output:
      Screen 0: minimum 320 x 200, current 2960 x 1050, maximum 8192 x 8192
      DVI-I-1 connected 1680x1050+0+0 (normal left inverted right x axis y axis) 433mm x 270mm
        1680x1050 60.0*+
        1280x1024 75.0 60.0
        1280x960 60.0
        1152x864 75.0
        1024x768 75.1 72.0 70.1 60.0
        832x624 74.6
        800x600 72.2 75.0 60.3 56.2
        640x480 72.8 75.0 66.7 60.0
        720x400 70.1
      HDMI-1 connected 1280x1024+1680+0 (normal left inverted right x axis y axis) 376mm x 301mm
        1280x1024 60.0*+ 75.0
        1152x864 75.0
        1024x768 75.1 60.0
        800x600 75.0 60.3
        640x480 75.0 60.0
        720x400 70.1
    lshw -c video:
      *-display
        Beschreibung: VGA compatible controller
        Produkt: G96 [GeForce 9500 GS]
        Hersteller: NVIDIA Corporation
        Physische ID: 0
        Bus-Informationen: pci@0000:01:00.0
        Version: a1
        Breite: 64 bits
        Takt: 33MHz
        Fähigkeiten: pm msi pciexpress vga_controller bus_master cap_list rom
        Konfiguration: driver=nouveau latency=0
        Ressourcen: irq:16 memory:fa000000-faffffff memory:d0000000-dfffffff memory:f8000000-f9ffffff ioport:df00(Größe=128) memory:fb000000-fb07ffff
    Thanks for your help!

    Read the article

  • (SOLVED) Problems Rendering Text in OpenGL Using FreeType

    - by Sean M.
    I've been following both the FreeType2 tutorial and the WikiBooks tutorial, trying to combine things from them both in order to load and render fonts using the FreeType library. I used the font loading code from the FreeType2 tutorial and tried to implement the rendering code from the WikiBooks tutorial (tried being the keyword, as I'm still trying to learn modern OpenGL; I'm using 3.2). Everything loads correctly and I have the shader program to render the text with working, but I can't get the text to render. I'm 99% sure that it has something to do with how I am passing data to the shader, or how I set up the screen. These are the code segments that handle OpenGL initialization, as well as font initialization and rendering:
      //Init glfw
      if (!glfwInit()) {
          fprintf(stderr, "GLFW Initialization has failed!\n");
          exit(EXIT_FAILURE);
      }
      printf("GLFW Initialized.\n");
      //Process the command line arguments
      processCmdArgs(argc, argv);
      //Create the window
      glfwWindowHint(GLFW_SAMPLES, g_aaSamples);
      glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
      glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
      g_mainWindow = glfwCreateWindow(g_screenWidth, g_screenHeight, "Voxel Shipyard", g_fullScreen ? glfwGetPrimaryMonitor() : nullptr, nullptr);
      if (!g_mainWindow) {
          fprintf(stderr, "Could not create GLFW window!\n");
          closeOGL();
          exit(EXIT_FAILURE);
      }
      glfwMakeContextCurrent(g_mainWindow);
      printf("Window and OpenGL rendering context created.\n");
      glClearColor(0.2f, 0.2f, 0.2f, 1.0f);
      //Are these necessary for Modern OpenGL (3.0+)?
      glViewport(0, 0, g_screenWidth, g_screenHeight);
      glOrtho(0, g_screenWidth, g_screenHeight, 0, -1, 1);
      //Init glew
      int err = glewInit();
      if (err != GLEW_OK) {
          fprintf(stderr, "GLEW initialization failed!\n");
          fprintf(stderr, "%s\n", glewGetErrorString(err));
          closeOGL();
          exit(EXIT_FAILURE);
      }
      printf("GLEW initialized.\n");
    Here is the font file (it's slightly too big to post): CFont.h/CFont.cpp
    Here is the solution zipped up: [solution] (https://dl.dropboxusercontent.com/u/36062916/VoxelShipyard.zip), if anyone feels they need the entire solution.
    If anyone could take a look at the code, it would be greatly appreciated. Also if someone has a tutorial that is a little more user friendly, that would also be appreciated. Thanks.

    Read the article

  • SSH from external network refused

    - by wulfsdad
    I've installed open-ssh-server on my home computer (running Lubuntu 12.04.1) in order to connect to it from school. This is how I've set up the sshd_config file:
      # Package generated configuration file
      # See the sshd_config(5) manpage for details
      # What ports, IPs and protocols we listen for
      #Port 22
      Port 2222
      # Use these options to restrict which interfaces/protocols sshd will bind to
      #ListenAddress ::
      #ListenAddress 0.0.0.0
      Protocol 2
      # HostKeys for protocol version 2
      HostKey /etc/ssh/ssh_host_rsa_key
      HostKey /etc/ssh/ssh_host_dsa_key
      HostKey /etc/ssh/ssh_host_ecdsa_key
      #Privilege Separation is turned on for security
      UsePrivilegeSeparation yes
      # Lifetime and size of ephemeral version 1 server key
      KeyRegenerationInterval 3600
      ServerKeyBits 768
      # Logging
      SyslogFacility AUTH
      #LogLevel INFO
      LogLevel VERBOSE
      # Authentication:
      LoginGraceTime 120
      PermitRootLogin no
      StrictModes yes
      RSAAuthentication yes
      PubkeyAuthentication yes
      #AuthorizedKeysFile %h/.ssh/authorized_keys
      # Don't read the user's ~/.rhosts and ~/.shosts files
      IgnoreRhosts yes
      # For this to work you will also need host keys in /etc/ssh_known_hosts
      RhostsRSAAuthentication no
      # similar for protocol version 2
      HostbasedAuthentication no
      # Uncomment if you don't trust ~/.ssh/known_hosts for RhostsRSAAuthentication
      #IgnoreUserKnownHosts yes
      # To enable empty passwords, change to yes (NOT RECOMMENDED)
      PermitEmptyPasswords no
      # Change to yes to enable challenge-response passwords (beware issues with
      # some PAM modules and threads)
      ChallengeResponseAuthentication no
      # Change to no to disable tunnelled clear text passwords
      #PasswordAuthentication yes
      # Kerberos options
      #KerberosAuthentication no
      #KerberosGetAFSToken no
      #KerberosOrLocalPasswd yes
      #KerberosTicketCleanup yes
      # GSSAPI options
      #GSSAPIAuthentication no
      #GSSAPICleanupCredentials yes
      X11Forwarding no
      X11DisplayOffset 10
      PrintMotd no
      PrintLastLog yes
      TCPKeepAlive yes
      #UseLogin no
      #MaxStartups 10:30:60
      #Banner /etc/issue.net
      Banner /etc/sshbanner.net
      # Allow client to pass locale environment variables
      AcceptEnv LANG LC_*
      Subsystem sftp /usr/lib/openssh/sftp-server
      # Set this to 'yes' to enable PAM authentication, account processing,
      # and session processing. If this is enabled, PAM authentication will
      # be allowed through the ChallengeResponseAuthentication and
      # PasswordAuthentication. Depending on your PAM configuration,
      # PAM authentication via ChallengeResponseAuthentication may bypass
      # the setting of "PermitRootLogin without-password".
      # If you just want the PAM account and session checks to run without
      # PAM authentication, then enable this but set PasswordAuthentication
      # and ChallengeResponseAuthentication to 'no'.
      UsePAM yes
      #specify which accounts can use SSH
      AllowUsers onlyme
    I've also configured my router's port forwarding table to include:
      LAN Ports: 2222-2222
      Protocol: TCP
      LAN IP Address: "IP Address" displayed by viewing "connection information" from right-click menu of system tray
      Remote Ports[optional]: n/a
      Remote IP Address[optional]: n/a
    I've tried various other configurations as well, using primary and secondary dns, and also with specifying remote ports 2222-2222. I've also tried with TCP/UDP (actually two rules because my router requires separate rules for each protocol). With any router port forwarding configuration, I am able to log in with
      ssh -p 2222 -v localhost
    But, when I try to log in from school using
      ssh -p 2222 onlyme@IP_ADDRESS
    I get a "No route to host" message. Same thing when I use the "Broadcast Address" or "Default Route/Primary DNS".
    When I use the "subnet mask", ssh just hangs. However, when I use the "secondary DNS" I receive a "Connection refused" message. :^( Someone please help me figure out how to make this work.
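
    For reference, one quick way to check from the remote network whether the forwarded port is reachable at all is a small Python sketch like the one below; the address is a placeholder for the home connection's public IP, not a value from the question:
      import socket

      def can_reach(host, port, timeout=5):
          """Return True if a plain TCP connection to host:port succeeds."""
          try:
              with socket.create_connection((host, port), timeout=timeout):
                  return True
          except OSError as exc:
              print("connection failed:", exc)
              return False

      # Replace with the home network's public IP address and forwarded port.
      print(can_reach("203.0.113.10", 2222))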

    Read the article

  • Ubuntu Linux won't display netbook's native resolution

    - by Daniel
    FYI: My netbook model is HP Mini 210-1004sa, which comes with the Intel Graphics Media Accelerator 3150 and has a 10.1" Active Matrix Colour TFT display, 1024 x 600. I recently removed Windows 7 Starter from my netbook and replaced it with Ubuntu 12.10. The problem is the OS doesn't seem to recognise the native display resolution of 1024x600, i.e. the bottom bits of Ubuntu are hidden beneath the screen, and the only two available resolutions are the default 1024x768 and 800x600. I've also thought about replacing Ubuntu with Lubuntu or Puppy Linux, as the system does run a bit slow, but I can't, as then I won't be able to access the taskbar and application menu, which will be hidden beneath the screen. Only Ubuntu with Unity is currently usable, as the Unity Launcher is visible enough. I was able to define a custom resolution 1024x600 using the Q&A: How set my monitor resolution? but when I set that resolution, there appears a black band at the top of the screen and the desktop area is lowered, with bits of it hidden beneath the screen. I tried leaving it at this new resolution and restarting the system to see if the black band would disappear and the display would fit correctly, but it gets reset to 1024x768 at startup and displays the following error:
      Could not apply the stored configuration for monitors
      none of the selected modes were compatible with the possible modes:
      Trying modes for CRTC 63
      CRTC 63: trying mode 800x600@60Hz with output at 1024x600@60Hz (pass 0)
      CRTC 63: trying mode 800x600@56Hz with output at 1024x600@60Hz (pass 0)
      CRTC 63: trying mode 640x480@60Hz with output at 1024x600@60Hz (pass 0)
      CRTC 63: trying mode 1024x768@60Hz with output at 1024x600@60Hz (pass 1)
      CRTC 63: trying mode 800x600@60Hz with output at 1024x600@60Hz (pass 1)
      CRTC 63: trying mode 800x600@56Hz with output at 1024x600@60Hz (pass 1)
      CRTC 63: trying mode 640x480@60Hz with output at 1024x600@60Hz (pass 1)
      Trying modes for CRTC 64
      CRTC 64: trying mode 1024x768@60Hz with output at 1024x600@60Hz (pass 0)
      CRTC 64: trying mode 800x600@60Hz with output at 1024x600@60Hz (pass 0)
      CRTC 64: trying mode 800x600@56Hz with output at 1024x600@60Hz (pass 0)
      CRTC 64: trying mode 640x480@60Hz with output at 1024x600@60Hz (pass 0)
      CRTC 64: trying mode 1024x768@60Hz with output at 1024x600@60Hz (pass 1)
      CRTC 64: trying mode 800x600@60Hz with output at 1024x600@60Hz (pass 1)
      CRTC 64: trying mode 800x600@56Hz with output at 1024x600@60Hz (pass 1)
      CRTC 64: trying mode 640x480@60Hz with output at 1024x600@60Hz (pass 1)

    Read the article

  • SOA & Application Grid Specialization step 3 of 6 – Education Competence Center

    - by Jürgen Kress
    SOA & Application Grid Specialization step 3 of 6 – education competence center. Dear Team, in our first step to become SOA Specialized & Application Grid Specialized we highlighted our OMM system to register your opportunities. In our second step we featured our marketing activities to create your reference cases and run joint marketing campaigns. In the third step we will focus on the education criteria: SOA Sales assessment & SOA Pre-Sales assessment & Support assessment.
    Steps:
    1. Login to Oracle Partner Network (for login support contact the Partner Business Centers)
    2. Go to the OPN Competence Center
    3. Select the Oracle Service-Oriented Architecture 11g Sales Specialist (3 persons required)
    4. Click the play button to run the assessment
    5. Select the Oracle Service-Oriented Architecture 11g PreSales Specialist (3 persons required)
    6. Click the play button to run the assessment
    7. Select the Oracle Technology Support Specialist (1 person required)
    8. Click the play button to run the assessment
    Tips:
    · You can run the assessments as often as you like. After each try you will see your current score and the correct answers to the questions.
    · During your next team meeting reserve an hour to become specialized jointly.
    · For the first 5 partners who contact us we will order a pizza service to ensure the success of your team meeting!
    · We want your feedback to improve the assessments. If you find an ambiguous question or one with wrong context or even wrong answers, send us your feedback. The first 5 partners who send us feedback will get a free competence center coffee cup!
    If you need to get an Oracle Partner Network Account please contact our Partner Business Centers. For more information on Specialization please visit our OPN Specialized Webcast Series. And become a member in our SOA Partner Community; for registration please visit www.oracle.com/goto/ema/soa. Jürgen Kress, SOA Partner Adoption EMEA. Thanks for your efforts to become Specialized!
    SOA Specialized | Application Grid Specialized
    Proof 2 transactions with OMM | Proof 2 transactions with OMM
    Create your 2 references | Create your 2 references
    SOA Sales assessment 3 | Application Grid Sales Specialist 3
    SOA Pre-Sales assessment 3 | Application Grid PreSales Specialist 3
    Support assessment 1 | Support assessment 2
    SOA Implementation assessment 4 | Application Grid Implementation assessment 4
    Technorati Tags: soa specialization Oracle Partner Network SOA Partner Community

    Read the article

  • Fuji camera "mounts" but folder not in Dolphin After Kubuntu 13.10 upgrade

    - by user207207
    The Fuji camera mount is reported in attached devices but is not visible in Dolphin after the Kubuntu 13.10 upgrade. I have reinstalled the driver and tried a few other suggestions meant for other camera mounts failing on previous Ubuntu upgrades. I have already spent a couple of hours trying to get my photos off the camera, very annoying. It worked perfectly in 11.04, 11.10, 12.04, 12.10 and 13.04.
    dmesg | tail; lsusb; lsb_release -a
      [ 6181.858786] CPUM: APIC 03 at 00000000fee00000 (mapped at ffffc90009400000) - ver 0x80050010, lint0=0x10700 lint1=0x10400 pc=0x00400 thmr=0x10000
      [17261.396236] CPUM: APIC 00 at 00000000fee00000 (mapped at ffffc90000c6a000) - ver 0x80050010, lint0=0x10700 lint1=0x00400 pc=0x00400 thmr=0x10000
      [17261.396239] CPUM: APIC 03 at 00000000fee00000 (mapped at ffffc90000c72000) - ver 0x80050010, lint0=0x10700 lint1=0x10400 pc=0x00400 thmr=0x10000
      [17261.396241] CPUM: APIC 02 at 00000000fee00000 (mapped at ffffc90000c70000) - ver 0x80050010, lint0=0x10700 lint1=0x10400 pc=0x00400 thmr=0x10000
      [17261.396255] CPUM: APIC 01 at 00000000fee00000 (mapped at ffffc90000c6e000) - ver 0x80050010, lint0=0x10700 lint1=0x10400 pc=0x00400 thmr=0x10000
      [32456.884907] usb 2-5: new high-speed USB device number 2 using ehci-pci
      [32457.654046] usb 2-5: New USB device found, idVendor=04cb, idProduct=01e8
      [32457.654050] usb 2-5: New USB device strings: Mfr=0, Product=2, SerialNumber=3
      [32457.654052] usb 2-5: Product: Digital Camera
      [32457.654053] usb 2-5: SerialNumber: 4C3230302020091117CAA59WP18548
      Bus 002 Device 002: ID 04cb:01e8 Fuji Photo Film Co., Ltd
      Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
      Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
      Bus 001 Device 003: ID 2013:024f PCTV Systems nanoStick T2 290e
      Bus 001 Device 002: ID 046d:082d Logitech, Inc. HD Pro Webcam C920
      Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
      Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
      No LSB modules are available.
      Distributor ID: Ubuntu
      Description: Ubuntu 13.10
      Release: 13.10
      Codename: saucy
    sudo apt-get install gvfs-bin
    gvfs-mount gphoto2://[usb:002,002]
      Error mounting location: Error initializing camera: -108: No such file or directory
    I have reported a bug in Dolphin, which has been transferred to Solid. Further information: I ran solid-hardware list details
      udi = '/org/kde/solid/udev/sys/devices/pci0000:00/0000:00:04.1/usb2/2-5'
      parent = '/org/kde/solid/udev' (string)
      vendor = '04cb' (string)
      product = 'Digital Camera' (string)
      description = 'Camera' (string)
      Block.major = 189 (0xbd) (int)
      Block.minor = 137 (0x89) (int)
      Block.device = '/dev/bus/usb/002/010' (string)
      Camera.supportedProtocols = {'ptp'} (string list)
      Camera.supportedDrivers = {'gphoto'} (string list)
    I still can't get my photos off; I can see the folders using the Gimp menu. If anyone has got any ideas, I'm willing to try them.

    Read the article

  • How far should an entity take care of its properties values by itself?

    - by Kharlos Dominguez
    Let's consider the following example of a class, which is an entity that I'm using through Entity Framework:
    - InvoiceHeader
      - BilledAmount (property, decimal)
      - PaidAmount (property, decimal)
      - Balance (property, decimal)
    I'm trying to find the best approach to keep Balance updated, based on the values of the two other properties (BilledAmount and PaidAmount). I'm torn between two practices here:
    1. Updating the balance amount every time BilledAmount and PaidAmount are updated (through their setters)
    2. Having an UpdateBalance() method that the callers would run on the object when appropriate.
    I am aware that I could just calculate the Balance in its getter. However, that isn't really possible, because this is an entity field that needs to be saved back to the database, where it has an actual column, and where the calculated amount should be persisted. My other worry about the automatically updating approach is that the calculated values might be slightly different from what was originally saved to the database, due to rounding (an older version of the software was using floats, but now decimals). So, loading, let's say, 2000 entities from the database could change their status and make the ORM believe that they have changed, and they would be persisted back to the database the next time the SaveChanges() method is called on the context. It would trigger a mass of updates that I am not really interested in, or could cause problems if the calculation methods changed (the entities fetched would lose their old values, to be replaced by freshly recalculated ones, simply by being loaded). Then, let's take the example even further. Each invoice has some related invoice details, which also have BilledAmount, PaidAmount and Balance (I'm simplifying my actual business case for the sake of the example, so let's assume the customer can pay each item of the invoice separately rather than as a whole). If we consider that the entity should take care of itself, any change to the child details should cause the invoice totals to change as well. In a fully automated approach, a simple implementation would be looping through each detail of the invoice to recalculate the header totals every time one of the properties changes. It would probably be fine for just one record, but if a lot of entities were fetched at once, it could create a significant overhead, as it would perform this process every time a new invoice detail record is fetched. Possibly worse, if the details are not already loaded, it could cause the ORM to lazy-load them just to recalculate the balances. So far, I went with the Update() method approach, mainly for the reasons I explained above, but I wonder if it was right. I'm noticing I have to keep calling these methods quite often and at different places in my code, and it is a potential source of bugs. It also has a detrimental effect on data binding, because when the properties of the detail or header change, the other properties are left out of date and the method has no way to be called. What is the recommended approach in this case?
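
    For illustration only, here is a minimal Python sketch of the two options being weighed (the real entity is a C#/Entity Framework class, and every name below is made up for the example):
      from decimal import Decimal

      class InvoiceHeader:
          """Sketch of the two recalculation strategies, not the actual entity."""

          def __init__(self, billed=Decimal("0"), paid=Decimal("0")):
              self._billed_amount = billed
              self._paid_amount = paid
              self.balance = billed - paid

          # Option 1: recalculate whenever a dependent property changes.
          @property
          def billed_amount(self):
              return self._billed_amount

          @billed_amount.setter
          def billed_amount(self, value):
              self._billed_amount = value
              self._recalculate()

          @property
          def paid_amount(self):
              return self._paid_amount

          @paid_amount.setter
          def paid_amount(self, value):
              self._paid_amount = value
              self._recalculate()

          def _recalculate(self):
              self.balance = self._billed_amount - self._paid_amount

          # Option 2: keep the setters dumb and let callers trigger the update explicitly.
          def update_balance(self):
              self._recalculate()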

    Read the article

  • Speed up your Silverlight Debugging for large projects

    - by Aligned
    I'm working on a 5+ year old ASP.NET project that has 74+ projects, and we've been adding new Silverlight applications to run in the ASP.NET page islands. My machine at work isn't the most powerful, so I find myself waiting a lot for the whole thing to build. I'm using Visual Studio 2010, so that takes up a lot of resources as well. This causes me to get distracted and I start looking at the news... I need to combat that more :-). I can't get a new machine, that's up to someone else, so I've found a few tricks to help.
    1. Only build the Silverlight project you're working with. This will build all referenced projects (you can see these by right clicking and clicking Project Dependencies) and package a new XAP (you can see all the actions in your output build window). Then refresh your page with the Silverlight app and it's up-to-date.
    2. I was working with a co-worker (thanks Jordan) who was using the Debug -> Attach to Process window. In the "Attach to:" row there is a "Select..." button. In the dialog, click "Debug these code types:" and select Silverlight. Hit OK. Then all you need to do is find your process (you might need to click the refresh button). I'm usually debugging in IE, so I select the first one and push "i" on the keyboard. That brings me to the open IE windows. Find the one with a type of Silverlight, x86. It is usually directly above one with a type of x86 that has the page title for "title". Click Attach and watch your output window spit out messages about loading debug symbols and your breakpoints being enabled (if this doesn't happen you chose the wrong process, hit stop and try again). Now you can debug the client code as normal; server code requires a full F5 or attaching to the correct process. To improve this even further, bind the menu item to a keystroke. I chose Ctrl + X, X (Tools -> Options -> Keyboard, search for Debug.AttachToProcess, set the shortcut keys globally and assign). Most of the time I build the project, then hit Ctrl + X, X, then "i", then Enter, and I'm debugging. The process I want is usually the first IE in the list.

    Read the article

  • “It Isn’t Easy At All; Otherwise, Everyone Would Be Doing It”

    - by Kathryn Perry
    A few months ago, JP Saunders (pictured left), who leads the go-to-market initiatives for the Oracle CX Service offering, kicked off a series of articles about modern customer service. He contends that to take care of customers, and the people that support those customers, companies need to make it easy to deliver consistently great experiences. But it's not easy; it's an art. The six posts in The Art of Easy series will help you better understand some of the customer service challenges you face and how to avoid common pitfalls. We pulled them all together here in one post for continuity and easy access.
    Saunders introduces the series with The Art of Easy: Make It Easy To Deliver Great Customer Service Experiences (Part 1).
    The Art of Easy: Offer Self Service With the Emphasis on Service (Part 2) by David Fulton (pictured left): David Fulton, Director of Product Management, Oracle Service Cloud, shares five tenets of customer self service that move an organization closer to becoming a modern customer service business.
    Easy Decisions For Complex Problems (Part 3) by Heike Lorenz (pictured right): Heike Lorenz, Director of Global Product Marketing, Policy Automation, writes about automating service policies to ensure that the correct decisions are being applied to the right people. The goal is to nurture the trusted relationships with customers during complex decision-making processes.
    Moving at the Speed of Easy (Part 4) by Chris Omland (pictured left): Chris Omland, Director of Product Management, Oracle Service Cloud, addresses the need for speed to keep up with customers' expectations. His advice: start with a platform that enables agile innovation, respects a company's unique needs, and has proven reliability to protect customer relationships.
    Knowledge Makes It Easy For Everyone (Part 5) by Nav Chakravarti (pictured right): Vice President Nav Chakravarti, Oracle Service Cloud, talks about managing the knowledge that customers need and want. He coaches readers on delivering answers to customers' questions easily, in context, with relevance, reliably, and accurately.
    Making Easy, Both Effective and Efficient (Part 6) by Melinda Uhland (pictured left): Melinda Uhland, Oracle CX Product Management, teaches us that happy agents produce happy customers. A Modern Customer Service organization is one that invests in its agents and empowers them with tools to make them efficient and effective, which, in turn, improves customer results.

    Read the article

  • Toshiba wireless is blocked

    - by zorrillo
    I have the same problem on a 32-bit Toshiba NB255; it first had Windows 7, then I installed Ubuntu 11.04. The wifi does not turn on. I used the commands rfkill unblock wlan0 and sudo ifconfig wlan0 down; they did not work. Setting the wireless communication switch to ON in the advanced menu of the BIOS did not work either. Neither the Toshiba utilities nor the Ubuntu utilities (wicd, wifi radar) helped. Editing the file with gedit as root: nothing. Installing the madwifi package: nothing. Exporting the wifi driver from Windows to Ubuntu by means of the NDISwrapper package: nothing either. Here is the current output from the CLI:
      root@zorrillo:~# rfkill list
      0: phy0: Wireless LAN
          Soft blocked: no
          Hard blocked: yes
      root@zorrillo:~# sudo lspci | grep Atheros
      07:00.0 Network controller: Atheros Communications Inc. AR9285 Wireless Network Adapter (PCI-Express) (rev 01)
      root@zorrillo:~# ifconfig -a
      eth0  Link encap:Ethernet direcciónHW 88:ae:1d:47:df:e1
            ACTIVO DIFUSIÓN MULTICAST MTU:1500 Métrica:1
            Paquetes RX:0 errores:0 perdidos:0 overruns:0 frame:0
            Paquetes TX:0 errores:0 perdidos:0 overruns:0 carrier:0
            colisiones:0 long.colaTX:1000
            Bytes RX:0 (0.0 B) TX bytes:0 (0.0 B)
            Interrupción:43 Dirección base: 0xe000
      lo    Link encap:Bucle local Direc. inet:127.0.0.1 Másc:255.0.0.0
            Dirección inet6: ::1/128 Alcance:Anfitrión
            ACTIVO BUCLE FUNCIONANDO MTU:16436 Métrica:1
            Paquetes RX:8 errores:0 perdidos:0 overruns:0 frame:0
            Paquetes TX:8 errores:0 perdidos:0 overruns:0 carrier:0
            colisiones:0 long.colaTX:0
            Bytes RX:480 (480.0 B) TX bytes:480 (480.0 B)
      ppp0  Link encap:Protocolo punto a punto Direc. inet:189.203.115.236 P-t-P:192.168.226.1 Másc:255.255.255.255
            ACTIVO PUNTO A PUNTO FUNCIONANDO NOARP MULTICAST MTU:1500 Métrica:1
            Paquetes RX:6384 errores:30 perdidos:0 overruns:0 frame:0
            Paquetes TX:6893 errores:0 perdidos:0 overruns:0 carrier:0
            colisiones:0 long.colaTX:3
            Bytes RX:5473081 (5.4 MB) TX bytes:974316 (974.3 KB)
      wlan0 Link encap:Ethernet direcciónHW 00:26:4d:c3:d0:44
            DIFUSIÓN MULTICAST MTU:1500 Métrica:1
            Paquetes RX:0 errores:0 perdidos:0 overruns:0 frame:0
            Paquetes TX:0 errores:0 perdidos:0 overruns:0 carrier:0
            colisiones:0 long.colaTX:1000
            Bytes RX:0 (0.0 B) TX bytes:0 (0.0 B)
    Note: it is logical that all the counters are zero if the wireless device is down.
      root@zorrillo:/home/zorrillo/Descargas/802BGA# ifconfig wlan0 up
      SIOCSIFFLAGS: Operation not possible due to RF-kill
    It would seem the wireless switch does not react to any Ubuntu 11.04 command (I hope to be wrong). The goal remains the same, as shown by the output above. I am worried; I have tried to find an answer for days and nights. Toshiba does not supply drivers or software support for Linux (marketing, of course). I only see that the device is up at the protocol level and down physically. My question is: is it possible to enable the device, or is it physically shut down? On the Toshiba NB255 the wifi is not switched on/off by a physical switch but by the combination Fn + F8 (which only works under Windows 7). Is there a possibility to configure the hot keys in Ubuntu?

    Read the article

  • Android - Force Close - Null Pointer on Canvas?

    - by user22241
    Please bear with me. I have a very odd problem. Basically, my app so far has 3 activities (a main splash screen, an 'options/menu' screen and the main app). If I follow the very specific steps outlined below, I get a 'null pointer exception' in the 2nd activity and the app force closes. Here are the steps: Start the app (a game based on SurfaceView), tap through to the third activity so the game is running, then hit the home key so the game is paused and put to the background; the activity/app is then ended through DDMS in the SDK and restarted on the device (all OK so far); now if I hit the back key on the device twice in quick succession, it happens. All other sequences of events are fine, even pressing the back key, waiting for the previous activity to show, then hitting back again - all OK. Only when the back key is pressed twice in quick succession following all the above steps does the problem occur. I'm assuming that the canvas isn't ready, as it's showing as 'null' when this happens, but I'm not sure why, as surely it happens when I'm trying to go back to activity 1, yet the logcat shows the error in activity 2. If I stop the activity running my 'doDraw' method (which references the canvas), then all is OK - so I can safely assume it is the canvas causing the problem. Also, if I skip my first activity (which is a very basic full-screen button which just displays a splash screen and waits for the user to tap the screen) and make my 2nd activity the launch activity, again, it is OK. This is the part of the code that I think is probably relevant:
      @Override
      public void surfaceChanged(SurfaceHolder arg0, int arg1, int arg2, int arg3) {
          vheight = this.getHeight();
          vwidth = this.getWidth();
      }

      @Override
      public void surfaceCreated(SurfaceHolder holder) {
          vheight = this.getHeight();
          vwidth = this.getWidth();
          this.viewWidth = vwidth;
          this.viewHeight = vheight;
          if (runthread == false) {
              if (preThread.getState() == Thread.State.TERMINATED) {
                  preThread = new OptionsThread(thisholder, thiscontext, thishandler);
              }
              preThread.setRunning(true);
              preThread.start();
          }
      }

      @Override
      public void surfaceDestroyed(SurfaceHolder holder) {
          preThread.setRunning(false); //Stop the loop
          boolean retry = true; //Stop the thread
          while (retry) {
              try {
                  preThread.join();
                  retry = false;
              } catch (InterruptedException e) {
              }
          }
      }
    Thank you all for any help you can offer

    Read the article

  • Upgrading Oracle Enterprise Manager: 12c to 12c Release 2

    - by jorge_neidisch
    1 - Download the OEM 12c R2. It can be downloaded here: http://www.oracle.com/technetwork/oem/grid-control/downloads/index.html (note: it is a set of three huge zips).
    2 - Unzip the archives.
    3 - Create a directory (as the OEM owner user) where the upgraded Middleware should be installed. For instance: $ mkdir /u01/app/oracle/Middleware12cR2
    4 - Back up the OMS (Middleware home and inventory), Management Repository and Software Library. http://docs.oracle.com/cd/E24628_01/doc.121/e24473/ha_backup_recover.htm#EMADM10740
    5 - Ensure that the Management tables don't have snapshots:
        SQL> select master, log_table from all_mview_logs where log_owner='<EM_REPOS_USER>'
        If there are snapshots, drop them:
        SQL> drop snapshot log on <master>
    6 - Copy the emkey from the existing OMS:
        $ <OMS_HOME>/bin/emctl config emkey -copy_to_repos [-sysman_pwd <sysman_pwd>]
        To verify whether the emkey is copied, run the following command:
        $ <OMS_HOME>/bin/emctl status emkey
        If the emkey is copied, then you will see the following message:
        The EMKey is configured properly, but is not secure. Secure the EMKey by running "emctl config emkey -remove_from_repos".
    7 - Stop the OMS and the Agent:
        $ <OMS_HOME>/bin/emctl stop oms
        $ <AGENT_HOME>/bin/emctl stop agent
    8 - From the unzipped directory, run $ ./runInstaller
        8a - Follow the wizard: Email / MOS; Software Updates: disable or leave empty.
        8b - Follow the wizard: Installation type: Upgrade -> One System Upgrade.
        8c - Installation Details: Middleware home location: enter the directory created in step 3.
        8d - Enter the DB Connection Details. Credentials for SYS and SYSMAN.
        8e - A dialog comes up: Stop the Job Gathering: click 'Yes'.
        8f - A warning comes up: click 'OK'.
        8g - Select the plugins to deploy along with the upgrade process.
        8h - Extend WebLogic: enter the password (recommended: the same password as for the SYSMAN user). A new directory will be created; recommended: /u01/app/oracle/Middleware12cR2/gc_inst
        8i - Let the upgrade proceed by clicking 'Install'.
        8j - Run the following script (as root) and finish the 'installation': $ /u01/app/oracle/Middleware12R2/oms/allroot.sh
    9 - Turn on the Agent: $ <AGENT_HOME>/bin/emctl start agent
        Note that the $AGENT_HOME might be located in the old Middleware directory: $ /u01/app/oracle/Middleware/agent/agent_inst/bin/emctl start agent
    10 - Go to the EM UI. Select the WebLogic Target and choose the option "Refresh WebLogic Domain" from the menu.
    11 - Update the Agents: Setup -> Manage Cloud Control -> Upgrade Agents -> Add (+). Note that the agents may take long to show up.
    ... and that's it! Or that should be it!

    Read the article

  • What is the recommended way to output values to FBO targets? (OpenGL 3.3 + GLSL 330)

    - by datSilencer
    I'll begin by apologizing for any dumb assumptions you might find in the code below, since I'm still pretty much green when it comes to OpenGL programming. I'm currently trying to implement deferred shading by using FBOs and their associated targets (textures in my case). I have a simple (I think :P) geometry+fragment shader program and I'd like to write its fragment shader stage output to three different render targets (previously bound by a call to glDrawBuffers()), like so:
      #version 330

      in vec3 WorldPos0;
      in vec2 TexCoord0;
      in vec3 Normal0;
      in vec3 Tangent0;

      layout(location = 0) out vec3 WorldPos;
      layout(location = 1) out vec3 Diffuse;
      layout(location = 2) out vec3 Normal;

      uniform sampler2D gColorMap;
      uniform sampler2D gNormalMap;

      vec3 CalcBumpedNormal()
      {
          vec3 Normal = normalize(Normal0);
          vec3 Tangent = normalize(Tangent0);
          Tangent = normalize(Tangent - dot(Tangent, Normal) * Normal);
          vec3 Bitangent = cross(Tangent, Normal);
          vec3 BumpMapNormal = texture(gNormalMap, TexCoord0).xyz;
          BumpMapNormal = 2 * BumpMapNormal - vec3(1.0, 1.0, -1.0);
          vec3 NewNormal;
          mat3 TBN = mat3(Tangent, Bitangent, Normal);
          NewNormal = TBN * BumpMapNormal;
          NewNormal = normalize(NewNormal);
          return NewNormal;
      }

      void main()
      {
          WorldPos = WorldPos0;
          Diffuse = texture(gColorMap, TexCoord0).xyz;
          Normal = CalcBumpedNormal();
      }
    If my render target textures are configured as:
      RT1: (GL_RGB32F, GL_RGB, GL_FLOAT, GL_TEXTURE0, GL_COLOR_ATTACHMENT0)
      RT2: (GL_RGB32F, GL_RGB, GL_FLOAT, GL_TEXTURE1, GL_COLOR_ATTACHMENT1)
      RT3: (GL_RGB32F, GL_RGB, GL_FLOAT, GL_TEXTURE2, GL_COLOR_ATTACHMENT2)
    and assuming that each texture has an internal format capable of containing the incoming data, will the fragment shader write the corresponding values to the expected texture targets? On a related note, do the textures need to be bound to the OpenGL context when they are used as multiple render targets? From some Googling, I think there are two other ways to output to MRTs:
    1: Output each component to gl_FragData[n]. Some forum posts say this method is deprecated. However, looking at the latest OpenGL 3.3 and 4.0 specifications at opengl.org, the core profiles still mention this approach.
    2: Use a typed output array variable for the expected type. In this case, I think it would be something like this:
      out vec3 [3] output;

      void main()
      {
          output[0] = WorldPos0;
          output[1] = texture(gColorMap, TexCoord0).xyz;
          output[2] = CalcBumpedNormal();
      }
    So which is then the recommended approach? Is there a recommended approach at all if I plan to code on top of OpenGL 3.3? Thanks for your time and help!

    Read the article

  • Deploying, but without those pesky test files!

    - by Chris Skardon
    Silverlight testing is great, we all know that (don't we??), we're expected to do it as part of the development process, but once we've got an awesome application written and we come to deploy it, we don't want the test files going out with it... You might be like me and have the files in a Web project - let's face it, that's how we're pushed into doing it... So let's stick with it! Now, I'm deploying via the wonders of the Web Deployment shizzle, but this also applies to the classic 'installer' project as well. Baaaasically, we're going to use the 'Debug' / 'Release' configurations to include given files. OK, you know in the top of your Visual Studio editor, you (usually) have a drop down which predominantly reads 'Debug'? Those are 'configurations'. Mostly we don't bother changing it, primarily due to laziness, but also the fact that we generally don't see 'Release' as actually doing anything other than making it harder to find problems :) Well today my friends we're going to change that bad boy... The next few steps are just helping you set up a new 'Debug' configuration, but you can just switch to the 'Release' configuration and skip to the end... First let's go to the Configuration Manager. There are multiple ways: through the 'Build' menu (at the bottom), or via the drop down which currently has 'Debug' in it :) Got it? Select 'New' from the 'Active solution configuration' drop down. Create a new configuration, kind of like the picture below shows (or for those graphically challenged - Name: DebugWithNoTests, and Copy settings from: 'Debug', ensuring the 'Create new project configurations' checkbox is checked). Press OK. VS will do some shizzle, and in the Configuration Manager you will see pretty much exactly what you did before, only with 'Debug' replaced with 'DebugWithNoTests'. Turn off the build options for the test projects. We won't need them.. IF you skipped down from the top, this is where you'll be wanting to stop!!! Close, and now we're one notepad step away from achieving our goals. Yes, I said notepad. You can't do what we're going to do in VS. (Pity). Go to the folder where your web project is, and right click on the '.csproj' file. Now open it with notepad. Head on down to the '<Content Include' bits, they'll look like this:
      <ItemGroup>
        <Content Include="ClientBin\Tests.xap" />
        ...
      </ItemGroup>
    Take this, and for each of the files you don't want deployed change it to:
      <Content Include="ClientBin\Tests.xap" Condition="'$(Configuration)' == 'Debug'" />
    Once you've got that sorted, publish your project, once with the Debug configuration selected, and another time with any other configuration ('Release', 'DebugWithNoTests' etc).. No files! Huzzah!

    Read the article

  • Unable to boot into 11.10 in normal (get a black screen) OR recovery (get just a flashing cursor, no cli)

    - by user1092284
    Okay, so I had a dual boot of Tango Studio (based on Ubuntu 10.04) and Windows XP. Yesterday I downloaded the .iso for Ubuntu 11.10 and attempted to install from a USB (my BIOS won't normally boot from USB but I had PLOP boot manager on a CD). I booted up Ubuntu from the USB and then from there formatted the partition with Tango on it and installed Ubuntu 11.10. On booting up I came into Grub rescue mode. So I booted up from the USB again and used boot-repair to reinstall Grub. After this I would see the normal Grub menu, but on choosing Ubuntu I would come to a black screen. On choosing recovery mode it would begin starting normally with no obvious errors, but instead of coming to a CLI I would just get a blank screen with a flashing cursor on the top left, not accepting any input. I have since reformatted and reinstalled from a CD rather than USB and had the exact same problem. I used boot-repair again and the result is the same. The output of the most recent boot-repair is at http://paste.ubuntu.com/869805/ I have also tried editing the Ubuntu grub entry and replacing quiet with text nomodeset as I saw in an answer to another question. This got me a bit further - I saw the purple Ubuntu loading screen but still came to a blank screen after that. Anyway, in most of the other questions in which that is brought up the user is still able to boot into recovery, while I am not. Can anyone help? Thanks in advance, let me know if there's any more info I need to provide!
    EDIT FOR MORE INFO: I read something saying it's quiet splash that should be replaced with nomodeset. Earlier I had left splash in the line. So I tried it this way and it froze after displaying the following text:
      fsck from util-linux 2.19.1
      mountall: Plymouth command failed
      mountall: Disconnected from Plymouth
      /dev/sda5: clean, 139359/1741488 files, 745830/6961125 blocks
    From a bit of googling it doesn't look like Plymouth is essential, but I've checked and I do have the most current versions of mountall and plymouth installed, so I don't know why there's a problem.
    EDIT FOR MORE INFO AGAIN: I used dpkg-reconfigure plymouth because I saw it mentioned in another forum, and it still says "Plymouth command failed" on boot

    Read the article

  • Developing web sites that imitate desktop apps. How to fight that paradigm? [closed]

    - by user1598390
    Suppose there's a company where web sites/apps are designed to resemble desktop apps. They struggle to add:

    - Splash screens
    - Drop-down menus
    - Tab pages
    - Pages that don't grow downward with content; content sits inside a scrollable area so the page is of a fixed size, resembling the one-screen limitation of desktop apps
    - Modal windows, pop-ups, etc.
    - Tree views
    - Absolutely no access to content unless you log in first, even for non-sensitive content. After the splash screen disappears, you are presented with a login screen
    - No links – just simulated buttons
    - Fixed page size; cannot open a link in another tab
    - A print button that prints directly (not showing a printable page, so the user can't print via the browser's print command)
    - Progress bars for loading content, even when the browser indicates it with its own animation
    - Fonts and colors that emulate a desktop app made with Visual Basic, PowerBuilder, etc. Every app looks almost as if it were made in Visual Basic

    They reject these elements:

    - Breadcrumbs
    - Good old underlined links
    - Generated/dynamic navigation, usage-based suggestions
    - Ability to open links in multiple tabs
    - Pagination
    - Printable pages
    - Ability to produce a URL you can save or share that links to an item, like when you send someone the link to a specific StackExchange question. The only URL is the main one
    - Back button

    To achieve this, tons of JavaScript code is needed. Lots and lots of JavaScript and Ajax code for things not related to the business, but to the necessity of hiding/showing that button, refreshing this listbox, greying out that label, etc. The complexity generated by forcing one paradigm into another means most lines of code are dedicated to maintaining the illusion of a desktop app. What is the best way to change this mindset, and make them embrace the web and start producing modern web apps instead of desktop imitations?

    EDIT: These sites are intranet sites. Users hate these apps. They constantly whine about them, but they have to use them to do their daily work. These sites are in-house solutions; the end-users have no choice but to use them. They are a "captive audience". Also, substitution will not happen because of high costs. But at least if that mindset is changed, new developments would be more web-like.

    Read the article

  • Summary of our Recent Pull Request Enhancements on CodePlex

    Over the past several weeks, we’ve been incrementally rolling out a bunch of enhancements around our pull request workflow for Git and Mercurial projects. Our goal is to make contributing to open source projects a simple and rewarding experience, and we’ll continue to invest in this area. Here’s a summary of the changes so far, in case you’ve missed them. As always, if you have any feedback, please let us know, whether on our ideas page or via Twitter.

    Support for branches: You can now pick the source and destination branches for your pull request, whether you’re sending one from your fork, or using it within a project to collaborate with your other trusted contributors.

    A redesigned creation experience: Our old pull request creation form was rather lacking. It asked for a title and comment in a small modal dialog, but that was about it. We knew we could do better, so we rethought the experience. Now, when you create a pull request, you’re taken to a new page that lets you select the source and destination, and gives you information on the diffs and commits that you’re sending, so you can confirm that you’re sending the right set of changes.

    Inline code snippets in discussion: If users comment on code in your pull request, we now display a preview of the relevant code snippet inline with their comment on the discussion. Subsequent replies on that line are combined in a single thread to preserve your context. No more clicking and hunting to find where the comments are. And you can add another inline comment right from the discussion area.

    Comment notifications: You can now elect to receive an e-mail notification if a user comments on your pull request. If it’s on a line of code, we’ll display the relevant code snippet in the e-mail.

    Redesigned diff viewer: Our old diff viewer hadn’t been touched in a while and was in need of an update. We started with a visual facelift to use standard red/green colors for additions/deletions and to remove the noisy “dots” that represented spaces and littered the diff viewer. Based on feedback that the viewable region for diffs was too small, especially for smaller screen resolutions, we revamped the way the viewport for the code is sized, and now expand it to fill the majority of the browser height when scrolling down. The set of improvements we implemented here also applies anywhere diffs are viewed, not just for pull requests.

    Read the article

  • [EF + Oracle] Inserting Data (Sequences) (2/2)

    - by JTorrecilla
    Prologue

    In the previous chapter we saw how to create DB records with EF; now we are going to look at a few Oracle-specific questions.

    ORACLE

    One characteristic of SQL Server that differs from Oracle is “Identity”. For anyone who has not worked with SQL Server: this property, which applies to integer columns, marks a column as auto-increment, so it is filled automatically without being written in the insert statement. In EF with SQL Server, the properties that map to Identity columns are filled after invoking the SaveChanges method. In Oracle there is no Identity property, but there is something similar.

    Sequences

    Sequences are DB objects that provide auto-increment values, but they are not related directly to a table. The syntax is as follows (sequence name, increment, start value, max/min values and the cycle/order options):

    CREATE SEQUENCE nombre_secuencia
    INCREMENT BY numero_incremento
    START WITH numero_por_el_que_empezara
    MAXVALUE valor_maximo | NOMAXVALUE
    MINVALUE valor_minimo | NOMINVALUE
    CYCLE | NOCYCLE
    ORDER | NOORDER

    How do we get a sequence value? To obtain the next value from the sequence:

    SELECT nb_secuencia.Nextval
    FROM Dual

    Since there is no direct way to indicate that a column is related to a sequence, there are several ways to imitate the behaviour: use a trigger (DB), use stored procedures or functions, or my particular option. The EF model only imports table objects, stored procedures or functions, but not sequences. Because of that, I decided to create my own extension method to invoke NEXTVAL on a sequence:

    public static class EFSequence
    {
        public static int GetNextValue(this ObjectContext contexto, string SequenceName)
        {
            // Take the EF connection string and strip the metadata part that VS
            // generated with the model, leaving only the plain Oracle connection string.
            string Connection = ConfigurationManager.ConnectionStrings["JTorrecillaEntities2"].ConnectionString;
            Connection = Connection.Substring(Connection.IndexOf(@"connection string=") + 19);
            Connection = Connection.Remove(Connection.Length - 1, 1);
            using (IDbConnection con = new Oracle.DataAccess.Client.OracleConnection(Connection))
            {
                using (IDbCommand cmd = con.CreateCommand())
                {
                    con.Open();
                    // Ask Oracle for the next value of the sequence passed in by name.
                    cmd.CommandText = String.Format("Select {0}.nextval from DUAL", SequenceName);
                    return Convert.ToInt32(cmd.ExecuteScalar());
                }
            }
        }
    }

    This ObjectContext extension method runs a query against the sequence indicated by the parameter. It takes the connection string from the app settings, removes the metadata that VS created when generating the EF model, and then returns the next value of the sequence. The next value of a sequence is unique, so concurrent users creating records in the DB through the sequence will not get duplicates. This is my own implementation; I know there are other, and probably better, ways to do it, and if I find one I promise to post it. To use the example you need to add a reference to the Oracle (ODP.NET) dll.
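    To round things off, here is a rough usage sketch – the JTorrecillaEntities2 context matches the connection string above, but the EMPLOYEES entity set, the EMPLOYEE properties and the EMPLOYEES_SEQ sequence name are hypothetical, not from the original post – showing the extension method supplying a primary key before SaveChanges:

    // Sketch only: entity set, property and sequence names are assumed.
    using (var context = new JTorrecillaEntities2())
    {
        var employee = new EMPLOYEE();
        // EF will not fill the key for us (no Identity in Oracle), so ask the
        // sequence for the next value and assign it explicitly.
        employee.ID = context.GetNextValue("EMPLOYEES_SEQ");
        employee.NAME = "Test";
        context.EMPLOYEES.AddObject(employee);
        context.SaveChanges();
    }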

    Read the article

  • Optionally Running SPSecurity.RunWithElevatedPrivileges with Delegates

    - by Damon Armstrong
    I was writing some SharePoint code today where I needed to give people the option of running some code with elevated permissions. When you run code in an elevated fashion it normally looks like this:

    SPSecurity.RunWithElevatedPrivileges(() =>
    {
        //Code to run
    });

    It wasn’t a lot of code, so I was initially inclined to do something horrible like this:

    public void SomeMethod(bool runElevated)
    {
        if (runElevated)
        {
            SPSecurity.RunWithElevatedPrivileges(() =>
            {
                //Code to run
            });
        }
        else
        {
            //Copy of code to run
        }
    }

    Easy enough, but I did not want to draw the ire of my coworkers for employing the CTRL+C CTRL+V design pattern. Extracting the code into a whole new method would have been overkill because it was a pretty brief piece of code. But then I thought: hey, wait, I’m basically just running a delegate, so why not define the delegate once and run it either in an elevated context or stand-alone? That resulted in this version, which I think is much cleaner because the code is only defined once and it didn’t require a bunch of extra lines to define a method:

    public void SomeMethod(bool runElevated)
    {
        // Define the delegate once...
        var code = new SPSecurity.CodeToRunElevated(() =>
        {
            //Code to run
        });

        // ...then run it elevated or as-is.
        if (runElevated)
        {
            SPSecurity.RunWithElevatedPrivileges(code);
        }
        else
        {
            code();
        }
    }
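    If the same “optionally elevate” decision crops up in several places, one further step – my own sketch, not from the original post – is to wrap it in a small helper so callers only pass the flag and the delegate:

    // Assumed utility class in a SharePoint project (requires Microsoft.SharePoint).
    public static class ElevationHelper
    {
        public static void Run(bool runElevated, SPSecurity.CodeToRunElevated code)
        {
            if (runElevated)
            {
                // Run the delegate under the application pool identity.
                SPSecurity.RunWithElevatedPrivileges(code);
            }
            else
            {
                // Run the same delegate as the current user.
                code();
            }
        }
    }

    // Usage: ElevationHelper.Run(runElevated, () => { /* code to run */ });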

    Read the article
