Search Results

Search found 27016 results on 1081 pages for 'entry point'.


  • Why isn't one of the constant buffers being loaded inside the shader?

    - by Paul Ske
    I did get the model to load with tessellation; the only problem is that one of the constant buffers isn't actually updating the tessellation factor inside the hull shader. I created a message box at the rendering point, so I know for sure the tessellation factor is assigned to the dynamic constant buffer. Inside the shader code, where it says .Edges[1] = tessellationAmount; the tessellationAmount is supposed to be sent from the dynamic buffer to the shader. Otherwise it's just a plain box. To explain better: there is a matrixBuffer, a cameraBuffer and a TessellationBuffer for constants, and a multiBuffer array that holds the matrix, camera and tessellation buffers. So when I set the hull shader, pixel shader, vertex shader and domain shader, the buffers get assigned from the multibuffer, e.g. devcon->HSSetConstantBuffers(0, 3, multibuffer); The only workaround I've found is to go into the shader and hard-code how much the edges tessellate, and the inside factor as well, to the same number. My question is: why wouldn't the TessellationBuffer work in the shader?

    Read the article

  • Take a Tour of the Future

    - by Tom Caldecott-Oracle
    Visit Our HQ Usability Lab During Oracle OpenWorld 2014. You want to look behind the scenes at the Oracle Applications User Experience Usability Lab on the campus of our headquarters. No problem. You’re invited to join an exclusive tour. When? Thursday, October 2 or Friday, October 3. Where? Redwood Shores, Calif. And what will you see on the tour? The future—how we test future product designs and the advanced technology we use to do that. You’ll also view early demos of upcoming enterprise software designs for tablets and mobile phones. We’ll provide round-trip transportation, with the pickup and drop-off point being the InterContinental San Francisco. Space is limited, so reserve your spot now. Want to know more about the tour and other Oracle Applications User Experience activities at Oracle OpenWorld? Visit UsableApps. Welcome to the future.

    Read the article

  • Cannot find the source of a granted permission on a folder

    - by Konrads
    I have a security mystery :) The Effective Permissions tab shows that a few sampled users (IT ops) have any and all rights (all boxes are ticked). The ACL shows that the local Administrators group has full access, as do some business groups, but the sampled users are not members of any of them. The local Administrators group contains some AD IT-ops-related groups, of which the sampled users, again, appear not to be members. The sampled users are not members of Domain Administrators either. I've tried tracing backwards (from permissions to user) and forwards (user to permission) and could not find anything. At this point, there are a few possibilities: I've missed something and they are members of some group after all; there's another way of getting full permissions; or Effective Permissions is horribly wrong. Is there a way to retrieve the decision logic behind Effective Permissions? Any hints, tips, ideas?

    Read the article

  • Can the JVM (Oracle) run into an OutOfMemoryError if the heap size is below the max?

    - by user439407
    I am running a Tomcat site (with an Nginx front end) that seems to be randomly running out of memory even though the max heap size is pretty large. My question is: is it possible for the JVM to get an OutOfMemoryError even if the heap size is significantly less than -Xmx? For instance, here is a snapshot I took just 15 seconds before an OutOfMemoryError: Tue Dec 18 23:13:28 JST 2012, Free memory: 162.31 MB, Total memory: 727.75 MB, Max memory: 3808.00 MB. I guess theoretically it's possible that my code generated 3 GB worth of objects in 15 seconds, but I highly doubt it. It seems like the JVM was unable to grow the heap even though it theoretically had room... Is it possible that other processes started using memory to the point that the JVM could not grow? I am running 64-bit Oracle HotSpot on a 64-bit VM running CentOS 5 with 6 GB of RAM.

    Read the article

  • Multiple Issues - USB booting Ubuntu 12.04

    - by Pixelishus
    I've been running Ubuntu 12.04 off a bootable USB stick so I can have a portable OS (and some OS variety). I've been using it for a while and I still have a few issues. First and most annoying, it often lags or freezes: windows will often stop working and go dark/not responding for anywhere between 10 seconds and a full minute, and on some occasions even longer. However, the window eventually starts working again. Similarly, sometimes the whole system will freeze, not just one window. The mouse will still move, but nothing will work: no clicking, no menus, no keyboard shortcuts. Again, it will usually start working eventually. I'm liking Ubuntu a lot, but these issues can make it annoying to use sometimes. For example, it will ALWAYS freeze at some point if I try to watch a YouTube video, and I'll have to wait for a minute or so until it starts responding again. Aside from the lag/freezing, any time I download packages it will always say "package operation failed" when it's done, though it does seem to download and install them. Another issue I'm having is with shutting down. If I open the logout/shutdown menu and click shutdown, it just logs me out and takes me to the login screen. If I try shutting down from the login screen, it won't do anything, as if I hadn't even clicked it. I've been using the terminal to reboot or shut down when I need to. I've looked around for answers to all of these problems and have yet to find a solution that works. Are these just normal issues with USB booting? I haven't installed Ubuntu on any computers; I've always booted from USB.

    Read the article

  • How to update a Debian DNS server? New VM with same hostname as old VM

    - by opensourcechris
    We run several Linux VMs on our Hyper-V cluster. Our old IT manager configured the DNS server to resolve the URL 'devlabs.ourdomain.com' to a Debian Squeeze Apache web server hosted on the Hyper-V cluster with the hostname 'devlabs'. We recently created a new Ubuntu VM to replace the original Squeeze VM, and when we created it we used the same hostname, 'devlabs', to name the new VM. My problem is that now I am only able to access the new Ubuntu VM by using its IP address. How can I update our DNS server to point the URL 'devlabs.ourdomain.com' to the new VM?

    Read the article

  • Remote access and local access same hostname

    - by cpf
    Hi serverfault, I have a server in a client's network, separated from theirs by a router/firewall. The intention is to have this server available through one hostname (example.com). My idea is to have (at least) a DNS server on the outside, so that machines outside the client's network can reach the internal server. The problem at that point would be the internal client (PC A). My question: what would I have to do to make something like this work? Is it even possible, or has it already been done? The goal is to not have to change anything on either PC A or PC B, while both should reach the same "internal server" when browsing to "example.com". Perhaps adding logic to the DNS server would work (detect that the external IP of the internal client [PC A] is the same as the IP for example.com, and give the local IP as the reply?). Anyhow: thanks for helping me think this through!

    Read the article

  • Service Catalogs for Database as a Service

    - by B R Clouse
    At the end of last month, I had the opportunity to present a speaking session at Oracle OpenWorld: Database as a Service: Creating a Database Cloud Service Catalog.  The session was well attended, which would have surprised me several months ago when I started researching this topic.  At that time, I thought of service catalogs as something trivial which could be explained in a few simple slides.  But while looking at all the different options and approaches available, I came to learn that designing a succinct and effective catalog is not a trivial task, and mistakes can lead to confusion and unintended side effects.  And when the room filled up, my new point of view was confirmed. In case you missed the session, or were able to attend but would like more details, I've posted a white paper that covers the topics from the session, and more.  We start with an overview of the components of a service catalog, and then look at several customer case studies of service catalogs for DBaaS.  Synthesizing those examples, we summarize the main options for defining the service categories and their levels.  We end with a template for defining Bronze | Silver | Gold service tiers for Oracle Database Services. The paper is now available here - watch for updates as we work to expand some sections and incorporate readers' feedback (hint - that includes your feedback). Visit our OTN page for additional Database Cloud collateral.

    Read the article

  • Keyboard not working 100% after Ubuntu 13.10 upgrade

    - by Marky
    If this has already been asked, my apologies; I did not find it before writing this, so please do point me to the correct page. Anyway, I have a weird issue on my laptop right now: the keyboard is not functioning 100%. I can type my login details to get into Ubuntu, and I can type something into the Dash, but beyond that (on the desktop) there is no output from the keyboard in any other app: I start to type and nothing comes out. The surprising thing is that when I switch to the Guest session, the keyboard functions normally. When I switch to another TTY, like Alt+F5, the keyboard also works normally. This is the first time I've encountered this in my use of Linux; keyboards have never failed me on any of the desktop environments I've used over the years. Any ideas what's happening? Maybe the config files in my home directory are too messy already: I've upgraded from 11.10 to 13.04, and now to 13.10, without a re-install. It worked fine so far, until now, and I can't do much without a keyboard. Thanks in advance! P.S. Mouse and touchpad work fine.

    Read the article

  • Setting up my own name server

    - by mmokh
    I'm in the process of setting up my own name servers using BIND9; however, I want to visualize how the name server setup relates to registrars and other name servers. Say I have a domain www.mydomain.com and I set up my 2 name servers: ns1.mydomain.com - 192.168.0.1 and ns2.mydomain.com - 192.168.0.2. 1) How does the world know that my name servers are now at ns1.mydomain.com and ns2.mydomain.com? I read about setting up glue records at my registrar. Could you please elaborate on this? I.e., once I set up these glue records, can I then use my name servers in NS records for any other domain? For example, NS records for www.otherdomain.com pointing to ns1.mydomain.com/ns2.mydomain.com. 2) Given that I set up the glue records as mentioned above, do I "have to" update mydomain.com's NS records to point to my name servers? Can I keep mydomain.com's NS records pointing to my registrar's name servers, but use ns1.mydomain.com/ns2.mydomain.com as name servers for any other domain I own? Thanks

    Read the article

  • Is there any chance that my data will get silently corrupted with a robocopy SMB network transfer?

    - by Archagon
    I'm setting up a NAS box for the first time. At the moment, I have most of my data backed up to a few local hard drives, and I intend to transfer all the data to the NAS over Ethernet once the RAID array is set up. Since this is all happening over the network, I'm a bit worried about my data getting corrupted silently during the transfer. From what I understand, data generally doesn't get corrupted without notice on local transfers because a checksum is performed at some point by the drive or the OS. (This could be totally wrong.) Does the same thing happen with SMB, or is it up to the transferring side to check the integrity of its data? And if it doesn't happen with SMB, is there a protocol that does ensure data integrity? I know that rsync can checksum a transfer, but I'm on Windows and I already have a robocopy configuration that I like. Will my data be safe, or do I have to use an external checksum tool to make sure?
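
    If an external verification pass turns out to be wanted, a minimal sketch of one (purely illustrative; it assumes the source and destination trees share the same relative paths, and the folder names used are hypothetical) could hash every file on both sides and report mismatches:

        using System;
        using System.IO;
        using System.Security.Cryptography;

        class CopyVerifier
        {
            // Compute a SHA-256 hash of a single file.
            static string HashFile(string path)
            {
                using (var sha = SHA256.Create())
                using (var stream = File.OpenRead(path))
                {
                    return BitConverter.ToString(sha.ComputeHash(stream));
                }
            }

            static void Main(string[] args)
            {
                string source = args[0];      // e.g. D:\Backup (hypothetical)
                string destination = args[1]; // e.g. \\nas\share\Backup (hypothetical)

                foreach (string srcFile in Directory.EnumerateFiles(source, "*", SearchOption.AllDirectories))
                {
                    // Map the source file to its counterpart on the destination share.
                    string relative = srcFile.Substring(source.Length).TrimStart('\\');
                    string dstFile = Path.Combine(destination, relative);

                    if (!File.Exists(dstFile) || HashFile(srcFile) != HashFile(dstFile))
                    {
                        Console.WriteLine("MISMATCH: " + relative);
                    }
                }
                Console.WriteLine("Verification pass complete.");
            }
        }

    Running something like this after the robocopy job gives an end-to-end check that is independent of whatever the SMB layer does on the wire.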

    Read the article

  • VirtualBox VM (spawned by Vagrant) running but inaccessible. What now?

    - by Matt V.
    I have a VirtualBox VM running Ubuntu that was started by Vagrant. At some point my SSH session connected to the guest stopped responding. I tried "vagrant halt" from a terminal window on the host (OS X), but the shutdown process seemed to hang as well. Shutting down the Oracle VM VirtualBox Manager doesn't shut down the VMs themselves. Is there a way, in either Vagrant or VirtualBox, to force the running VM to shut down? When running desktop guest OSes, closing the GUI window presents several options for shutting down the guest, but I don't know how to do the equivalent when the guest is running headless.

    Read the article

  • .htaccess redirect root directory and subpages with parameters

    - by wali
    I am having difficulty redirecting a root directory while at the same time redirecting pages in a subdirectory to a different URL. For example: http://test.example.com/olddir/sub/page.php?v=one to http://test.example.com/new/one while also redirecting any request to the root of the olddir folder. I have tried RewriteCond %{QUERY_STRING} v=one RewriteRule ^/olddir/sub/page.php /new/? [R=301] and also RedirectMatch /oldir "test.example.com" RedirectMatch /olddir/sub/page.php?v=one "test.example.com/new/one" Any help at this point will be extremely appreciated... Thanks!

    Read the article

  • If an old Exchange server is not part of a domain, does that imply that it can be safely removed without affecting mailflow?

    - by Bigbio2002
    We are doing some cleanup, and there is an old Exchange VM hanging around that we want to get rid of. We do not have the local admin credentials, but we can ascertain that it is not part of the current domain. Seeing as the new production Exchange server is working fine, is it safe to power off and remove the old server? *I should probably note that this is not an Edge Transport server. There was an upgrade to Exchange 2013 at some point in the past, and there is only one functioning Exchange server now.

    Read the article

  • When adding a second processor to SQL Server, will it automatically balance the load?

    - by ddavis
    We have SQL Server 2008 R2 (10.5) on a dedicated box with a single 2.4 GHz processor, which regularly runs at 70-80% CPU. We are going to be adding a significant number of users to the application and therefore want to add a second processor to the box (scale up). Will SQL Server automatically use the second processor to balance threads, or is there additional configuration that will need to be done? In other words, will adding the second processor drop my CPU usage to 35-40% per CPU by automatically balancing the load? Based on what I read here, it seems that it will: http://msdn.microsoft.com/en-us/library/ms181007.aspx However, I've read elsewhere that CPU performance gains can be made by assigning database tables to different filegroups, but I'm not sure we want to get that complicated at this point.

    Read the article

  • Server IP must be a LAN IP (Port Forwarding Netgear)

    - by rphello101
    I'm trying to set up a server (Apache) on my computer, and I'm fairly new to this. As I understand it, for it to be accessible to other computers, I need to forward port 80. When I try to forward the port, though, I get the error: "Server IP must be a LAN IP". I noticed in ipconfig that my default gateway is different from my wireless router. My computer is not hardwired, not on WiFi. Furthermore, I do not, at this point, have a static IP. I read that it should still work with a dynamic IP until it changes. Any ideas on what I can do?

    Read the article

  • Cannot boot: FGLRX 8.780 + Kernel 2.6.35-25

    - by pluc
    The situation before this all happened is pretty standard. I have an HP Pavilion dv5 laptop with an ATI Mobility Radeon 4200 series card. It always worked fine with Ubuntu for as long as I can remember. At one point, however, something happened and truly made a majestic mess of things. It might've been extra repos I enabled with Ubuntu Tweak - I do not know. But something made it so that my system would no longer boot. And when I say "won't boot", this is what I mean: during a normal bootup, any entry (except Windows) selected in GRUB (or BURG, I'm not even sure which one I'm using anymore) will show the Ubuntu loading screen and then try to start X (or GDM) 5 times. The screen goes dark, then black, and back to the Ubuntu loading screen, and then it just stays there until I spawn another TTY. I have no idea what is happening or why. There are no errors in my logs, and I'm truly at a loss here. I've linked three files: Xorg.0.log, the output of dmesg, and the GDM log. Xorg.0.log: http://ubuntu.pastebin.com/tpVKc2tc dmesg: ubuntu.pastebin.com/Nd5aYj45 gdm's :0.log: couldn't post due to lack of points :( Let me know if any of you more knowledgeable folks can restore some sanity to my life. Any help is greatly appreciated.

    Read the article

  • Less daunting front end for SQL Server

    - by Martin
    We currently have a few users who have been using Access very successfully to throw around large amounts of data. We've now got to the point where the data is just too large to be held in Access, and we also want to hold it in a single place where multiple users can access it. We have therefore moved the data over to SQL Server. I want to provide a general tool that they can use to view the data on the server and do some simple things like run queries and filters and export the data for offline manipulation. I don't want the support headaches that might come with rolling out SQL Server Management Studio, and neither do I want to create an Access database with links for each current database or for ones that are created in the future. Can anyone recommend a simple tool that will connect to a server, list all the databases and allow a user to drill into a table and look at the data? Many thanks.

    Read the article

  • Mapped network drive missing from My Computer and Explorer

    - by matt wilkie
    On a Windows XP Pro SP3 machine, one network drive refuses to show up in My Computer or Explorer. The missing drive letter is G:, if that matters. Other mappings work fine, and other profiles on the same machine have no problem seeing G:. I can access G: just fine by typing it into the address bar or in a CMD shell. I've used TweakUI to toggle hide/show for G: with no difference; TweakUI says G: should be visible, and I've logged off and on between toggles to make sure the settings take effect. I've also looked at the registry key [HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Policies\Explorer] and made sure it's zeroed. We've limped along with this broken setup for some time, just working around it, but some applications do not allow typing in a path when choosing a place to save files, and it's reached the point where it's intolerable. So, does anyone have any idea why XP won't show this drive letter, or how to fix it?
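
    For reference, the policy value Explorer consults under that key is the NoDrives DWORD, a bitmask of drive letters (bit 0 hides A:, bit 1 hides B:, and so on, so G: is bit 6); the same value can also exist under HKEY_LOCAL_MACHINE. A minimal sketch of a check, assuming the value is a plain DWORD, might look like this:

        using System;
        using Microsoft.Win32;

        class NoDrivesCheck
        {
            static void Main()
            {
                const string keyPath =
                    @"Software\Microsoft\Windows\CurrentVersion\Policies\Explorer";

                using (RegistryKey key = Registry.CurrentUser.OpenSubKey(keyPath))
                {
                    // NoDrives is a bitmask: bit 0 hides A:, bit 1 hides B:, ..., bit 6 hides G:.
                    int noDrives = (key == null) ? 0 : Convert.ToInt32(key.GetValue("NoDrives", 0));

                    if (noDrives == 0)
                    {
                        Console.WriteLine("NoDrives is zero under HKCU; the hidden drive comes from elsewhere.");
                        return;
                    }

                    for (char drive = 'A'; drive <= 'Z'; drive++)
                    {
                        if ((noDrives & (1 << (drive - 'A'))) != 0)
                        {
                            Console.WriteLine(drive + ": is hidden by the NoDrives policy.");
                        }
                    }
                }
            }
        }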

    Read the article

  • Game Asset Management

    - by user964123
    I am making my first small mobile game in C# XNA. Let's say I have 3 screens: the main menu, options and game screen. A single game session usually lasts about a minute, so the user will alternate frequently between the main menu and the game screen. Therefore, once I load the textures for either screen, I want to keep them in memory to avoid frequent reloading. Both screens share some assets, like their background textures, but differ in others. The first solution I came up with is making 2 texture factory classes, MainScreenAssetFactory and GameScreenAssetFactory, each with its own content manager, and I'll store them in a globally accessible point so that they persist after either screen is destroyed. There is also an OptionsScreenAssetFactory, but I don't want to cache that one since the options screen is rarely visited. A typical factory would look something like this:

        public class MainScreenAssetFactory
        {
            private readonly ContentManager contentManager;

            public MainScreenAssetFactory(IServiceProvider serviceProvider, string rootDirectory)
            {
                contentManager = new ContentManager(serviceProvider)
                {
                    RootDirectory = rootDirectory
                };
            }

            public Texture2D ListElementBackground
            {
                get { return contentManager.Load<Texture2D>("UserTab"); }
            }

            public Texture2D ListElementBulletPoint
            {
                get { return contentManager.Load<Texture2D>("TabIcon"); }
            }

            public Texture2D LoggedOutUser
            {
                get { return contentManager.Load<Texture2D>("LoggedOutUser"); }
            }
        }

    Since the main, options and game screens share some common resources, instead of loading them more than once I created another class, CommonAssetTexFactory, which holds the common stuff and stays in memory during the app lifetime. For example, this class gets passed to the options screen when it is created. However, given my small game with its few assets, I am already finding this solution cumbersome and inflexible. Changing anything requires checking whether it's already in the common factory and, if not, modifying the existing factories, and so on. And this is just considering textures; I haven't added sound files yet. I can't imagine bigger games with thousands of resources using this approach. A better idea must exist. Would someone please enlighten me?
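
    One common alternative (a minimal sketch only, not the definitive XNA pattern; the class, group and asset names below are hypothetical) is to group assets by lifetime rather than by screen, giving each group its own ContentManager. ContentManager.Load<T> already caches each asset it loads, so the per-texture properties disappear, and a rarely used group such as the options screen can be thrown away with a single Unload call:

        using System;
        using System.Collections.Generic;
        using Microsoft.Xna.Framework.Content;
        using Microsoft.Xna.Framework.Graphics;

        // Groups assets by lifetime instead of by screen. Each group owns one
        // ContentManager, so everything in the group can be unloaded in one call.
        public class AssetGroups
        {
            private readonly Dictionary<string, ContentManager> groups =
                new Dictionary<string, ContentManager>();
            private readonly IServiceProvider services;
            private readonly string rootDirectory;

            public AssetGroups(IServiceProvider services, string rootDirectory)
            {
                this.services = services;
                this.rootDirectory = rootDirectory;
            }

            // Load (or return the cached copy of) a texture in the named group.
            public Texture2D Texture(string group, string assetName)
            {
                ContentManager content;
                if (!groups.TryGetValue(group, out content))
                {
                    content = new ContentManager(services) { RootDirectory = rootDirectory };
                    groups.Add(group, content);
                }
                return content.Load<Texture2D>(assetName); // Load caches per ContentManager
            }

            // Drop every asset in a rarely used group, e.g. the options screen.
            public void UnloadGroup(string group)
            {
                ContentManager content;
                if (groups.TryGetValue(group, out content))
                {
                    content.Unload();
                    groups.Remove(group);
                }
            }
        }

    Usage might then look like assets.Texture("shared", "UserTab") for common art, assets.Texture("game", "Enemy") inside the game screen, and assets.UnloadGroup("options") when leaving the options screen, which keeps the caching policy in one place as the asset count grows.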

    Read the article

  • How to access MySQL on Windows

    - by Dan
    This may sound like a really dumb question, but I normally only deal with MSSQL, not with LAMP stuff, so I'm struggling to figure out what's going on. I have Windows 7 and have installed MySQL 5.1 through the Web Platform Installer. I have HeidiSQL installed to manage the data in MySQL, but how do I connect? Heidi asks for 'Hostname / IP', which is prepopulated with 127.0.0.1; it prepopulates the user field with 'root' (which is right), and I'm entering the password I chose when MySQL was installed. However, it just errors when I connect, saying: SQL Error (1045): Access denied for user 'root'@'localhost' (using password: YES). Can anyone point me in the right direction here? Many thanks...

    Read the article

  • Question about modeling with MVC (the pattern, not the MS stuff / non web)

    - by paul
    I'm working on an application in which I'm looking to employ the MVC pattern, but I've come up against a design decision point I could use some help with. My application deals with the design of state machines. Currently the MVC model holds information about the machine's states, inputs, outputs, etc. The view will show a diagram of the machine, graphically allowing the user to add new states, establish transitions, and put the states in a pleasing arrangement, among other things. I would like to store part of the diagram's state (e.g. the x and y positions of the states) when the machine information is stored for later retrieval, and I'm wondering how best to structure the model(s?) for this. This UI information seems more closely related to the view than to the state-machine model, so I was thinking that a secondary model might be in order, but I am reluctant to pursue this route because of the added complexity. Adding this information to the current model doesn't seem the right way to go about it either. This is my first time using the MVC pattern, so I'm still figuring things out. Any input would be appreciated.
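
    To make the trade-off concrete, here is a minimal sketch of the "secondary model" option (the class and property names are hypothetical, and plain fields are used for brevity): the layout data lives in a small companion structure that is saved next to the machine but never read by the state-machine logic itself.

        using System.Collections.Generic;

        // Domain model: what the state machine is (states, inputs, transitions).
        public class StateMachine
        {
            public List<string> States = new List<string>();
            public List<Transition> Transitions = new List<Transition>();
        }

        public class Transition
        {
            public string From;
            public string To;
            public string Input;
        }

        // Presentation data: how the diagram is drawn. Keyed by state name and
        // persisted alongside the machine, but invisible to the machine logic.
        public class StateLayout
        {
            public string StateName;
            public float X;
            public float Y;
        }

        // The unit that gets saved and loaded, so machine and layout stay together.
        public class MachineDocument
        {
            public StateMachine Machine = new StateMachine();
            public List<StateLayout> Layout = new List<StateLayout>();
        }

    The controller would save and load a MachineDocument as one unit, the diagram view would read and write only the Layout part, and the extra complexity stays confined to that one wrapper class.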

    Read the article

  • Motivation for service layer (instead of just copying dlls)?

    - by BornToCode
    I'm creating an application that has 2 different UIs, so I'm building it with a service layer, which I understood is appropriate for such a case. However, I found myself just creating a web method for every single method I have in the BL layer, so the services are basically built from methods that look like this: return customers_bl.Get_Customer_Prices(customer_id); I understood that a main point of the service layer is to prevent duplication of code, so I asked myself: well, why not just reference the BL.dll (and the DAL.dll) from the other UI and re-copy the dll files whenever something changes? It might not be so 'neat', but is the whole purpose of the service layer to prevent that? {I know something is wrong in my approach, I'm probably missing the importance of the service layer, and I'd like more motivation to create another layer, especially because as it is I found that many of my BL functions ALREADY look like: return customers_dal.Get_Customer_Prices(cust_id) which led me to ask: was it really necessary to create the BL when only several functions actually have LOGIC inside the BL?} So I'm looking for more motivation to create ONE MORE layer; I'm sure it's not just for the convenience of not having to re-copy the dlls on changes? Am I grasping it wrong? Any simple guidelines on how to design a service layer (should it correspond to all the BL layer functions or not? any simple example?) or any other enlightenment on the subject?
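
    For what it's worth, here is a purely illustrative sketch (the class, method and type names are hypothetical, and the BL classes are reduced to stubs) of the difference between a pass-through service and a coarser-grained one. The usual motivation for the extra layer shows up in the second shape: one remote call composes several BL calls, both UIs reuse it unchanged, and a BL fix is deployed once on the server instead of re-copying DLLs to every client.

        using System.Collections.Generic;

        // Hypothetical BL stubs standing in for the real BL.dll classes.
        public class CustomersBl
        {
            public decimal GetCustomerPrices(int customerId) { return 0m; }
        }

        public class OrdersBl
        {
            public List<int> GetOpenOrders(int customerId) { return new List<int>(); }
        }

        public class CustomerSummary
        {
            public decimal Prices;
            public List<int> OpenOrders;
        }

        // A 1:1 pass-through adds little beyond what referencing BL.dll would give you.
        public class CustomerService
        {
            private readonly CustomersBl customersBl = new CustomersBl();

            public decimal GetCustomerPrices(int customerId)
            {
                return customersBl.GetCustomerPrices(customerId);
            }
        }

        // A coarser-grained operation: one remote call that composes several BL calls
        // and returns exactly what a UI screen needs.
        public class CustomerPortalService
        {
            private readonly CustomersBl customersBl = new CustomersBl();
            private readonly OrdersBl ordersBl = new OrdersBl();

            public CustomerSummary GetCustomerSummary(int customerId)
            {
                return new CustomerSummary
                {
                    Prices = customersBl.GetCustomerPrices(customerId),
                    OpenOrders = ordersBl.GetOpenOrders(customerId)
                };
            }
        }

    If every service method stays a one-line pass-through, that is often a sign that the second UI could indeed just reference the BL assembly directly, or that the service operations might be better shaped around UI use cases than around individual BL methods.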

    Read the article

  • IIS6 can't find site on local network

    - by chezy525
    I have a Windows 2003 server with dual NICs running IIS6. I can access everything remotely, but the internal network can't seem to find the site, regardless of which IP address I try. There are really several weird things happening here, but I'm going to limit this question to what I'm guessing is the simplest problem (hoping that its solution fixes the other things as well): from the server itself, I can access the web page using the primary IP address (i.e. http://192.168.1.2/index.htm), but not using the secondary IP address (i.e. http://10.10.10.2/index.htm). Pinging both IP addresses from the server works, and the "Web site identification" in IIS has the IP address set to "(All Unassigned)"... which I believe should bind both IP addresses to this site. I apologize if I'm not providing enough details about my setup, but at this point I don't even know what's relevant.

    Read the article

  • Project Time Tracker

    - by Geertjan
    Based on yesterday's blog entry, let's do something semi useful and display, in the project popup, which is available when you right-click a project in the Projects window, the time since the last change was made anywhere in the project, i.e., we can listen recursively to any changes done within a project and then update the popup with the newly acquired information, dynamically:

        import java.awt.event.ActionEvent;
        import java.text.DateFormat;
        import java.text.SimpleDateFormat;
        import java.util.ArrayList;
        import java.util.Collection;
        import java.util.List;
        import javax.swing.AbstractAction;
        import org.netbeans.api.project.Project;
        import org.netbeans.api.project.ProjectUtils;
        import org.openide.awt.ActionID;
        import org.openide.awt.ActionReference;
        import org.openide.awt.ActionRegistration;
        import org.openide.awt.StatusDisplayer;
        import org.openide.filesystems.FileAttributeEvent;
        import org.openide.filesystems.FileChangeListener;
        import org.openide.filesystems.FileEvent;
        import org.openide.filesystems.FileRenameEvent;
        import org.openide.util.Lookup;
        import org.openide.util.LookupEvent;
        import org.openide.util.LookupListener;
        import org.openide.util.Utilities;
        import org.openide.util.WeakListeners;

        @ActionID(
                category = "Demo",
                id = "org.ptt.TrackProjectSelectionAction")
        @ActionRegistration(
                lazy = false,
                displayName = "NOT-USED")
        @ActionReference(
                path = "Projects/Actions",
                position = 0)
        public final class TrackProjectSelectionAction extends AbstractAction
                implements LookupListener, FileChangeListener {

            private Lookup.Result<Project> projects;
            private Project context;
            private Long startTime;
            private Long changedTime;
            private DateFormat formatter;
            private List<Project> timedProjects;

            public TrackProjectSelectionAction() {
                putValue("popupText", "Timer");
                formatter = new SimpleDateFormat("HH:mm:ss");
                timedProjects = new ArrayList<Project>();
                projects = Utilities.actionsGlobalContext().lookupResult(Project.class);
                projects.addLookupListener(
                        WeakListeners.create(LookupListener.class, this, projects));
                resultChanged(new LookupEvent(projects));
            }

            @Override
            public void resultChanged(LookupEvent le) {
                Collection<? extends Project> allProjects = projects.allInstances();
                if (allProjects.size() == 1) {
                    Project currentProject = allProjects.iterator().next();
                    if (!timedProjects.contains(currentProject)) {
                        String currentProjectName =
                                ProjectUtils.getInformation(currentProject).getDisplayName();
                        putValue("popupText",
                                "Start Timer for Project: " + currentProjectName);
                        StatusDisplayer.getDefault().setStatusText(
                                "Current Project: " + currentProjectName);
                        timedProjects.add(currentProject);
                        context = currentProject;
                    }
                }
            }

            @Override
            public void actionPerformed(ActionEvent e) {
                refresh();
            }

            protected void refresh() {
                startTime = System.currentTimeMillis();
                String formattedStartTime = formatter.format(startTime);
                putValue("popupText", "Timer started: " + formattedStartTime
                        + " (" + ProjectUtils.getInformation(context).getDisplayName() + ")");
            }

            @Override
            public void fileChanged(FileEvent fe) {
                changedTime = System.currentTimeMillis();
                formatter = new SimpleDateFormat("mm:ss");
                String formattedLapse = formatter.format(changedTime - startTime);
                putValue("popupText", "Time since last change: " + formattedLapse
                        + " (" + ProjectUtils.getInformation(context).getDisplayName() + ")");
                startTime = changedTime;
            }

            @Override
            public void fileFolderCreated(FileEvent fe) {}

            @Override
            public void fileDataCreated(FileEvent fe) {}

            @Override
            public void fileDeleted(FileEvent fe) {}

            @Override
            public void fileRenamed(FileRenameEvent fre) {}

            @Override
            public void fileAttributeChanged(FileAttributeEvent fae) {}
        }

    Some more work needs to be done to complete the above, i.e., for each project you somehow need to maintain the start time and last change and redisplay that whenever the user right-clicks the project.

    Read the article
