Search Results

Search found 12437 results on 498 pages for 'normal mapping'.


  • Graphics performance of 945GME

    - by l0b0
    Edit: Since setting Appearance - Visual Effects up to a stunning "Normal", I now get ~35 FPS in glxgears right after login, with nothing else running :( I'm getting terrible graphics performance in NeverWinter Nights (native with SoU+HotU+CEP2) on my Eee PC 1005HAB. Even with all graphics settings (including the "advanced" ones) at minimum I get about 2-10 FPS, depending on the scene. Firefox is really sluggish as well - changing tabs often takes a second, scrolling is laggy, and typing this I notice the delay between pressing keys and seeing the text on screen. The rest of the OS is running OK, although general performance seems to be even worse than my old Eee PC 900. glxgears gives about 60 FPS, which is apparently as it should be (synchronized with the monitor refresh rate). Bugs like Launchpad #252094 and the instructions for Reverting the Jaunty Xorg intel driver to 2.4 are old enough that I'm afraid following the instructions would render the system unusable. Are there any tips for improving graphics performance on this system that are still relevant for 10.10?

    $ uname -a
    Linux l0b0eee 2.6.35-28-generic #49-Ubuntu SMP Tue Mar 1 14:40:58 UTC 2011 i686 GNU/Linux
    $ lspci -nn | grep VGA
    00:02.0 VGA compatible controller [0300]: Intel Corporation Mobile 945GME Express Integrated Graphics Controller [8086:27ae] (rev 03)
    $ glxinfo
    name of display: :0.0
    display: :0  screen: 0
    direct rendering: Yes
    server glx vendor string: SGI
    server glx version string: 1.4
    ...

    Read the article

  • Join us for Live Oracle VM and Oracle Linux Cloud Events in Europe

    - by Monica Kumar
    Join us for a series of live events and discover how Oracle VM and Oracle Linux offer an integrated and optimized infrastructure for quickly deploying a private cloud environment at lower cost. As one of the most widely deployed operating systems today, Oracle Linux delivers higher performance, better reliability, and stability, at a lower cost for your cloud environments. Oracle VM is an application-driven server virtualization solution fully integrated and certified with Oracle applications to deliver rapid application deployment and simplified management. With Oracle VM, you have peace of mind that the entire Oracle stack deployed is fully certified by Oracle. Register now for any of the upcoming events, and meet with Oracle experts to discuss how we can help in enabling your private cloud.

    Nov 20: Foundation for the Cloud: Oracle Linux and Oracle VM (Belgium)
    Nov 21: Oracle Linux & Oracle VM Enabling Private Cloud (Germany)
    Nov 28: Realize Substantial Savings and Increased Efficiency with Oracle Linux and Oracle VM (Luxembourg)
    Nov 29: Foundation for the Cloud: Oracle Linux and Oracle VM (Netherlands)
    Dec 5: MySQL Tech Tour, including Oracle Linux and Oracle VM (France)

    Hope to see you at one of these events!

    Read the article

  • Spinup time failure

    - by bioShark
    I am not sure whether this is a real question or a bug I should report to Ubuntu. Using: Ubuntu 11.10, on an Intel Q6600, Samsung Spinpoint F4 2TB. I had set my PC to Suspend, and after I came back, pressed Enter and logged in, everything was back to normal. However, I had a message from Disk Utility that one disk reports errors. I entered Disk Utility, and my Samsung 2TB disk, the one on which my Ubuntu is installed, had the SMART status turned red, with an error message on it. The error was: Spinup time failed. Value 21, threshold value was 25 (so the error was reported because 21 < 25). I restarted and booted up in Windows to see what HD Tune is reporting. Unfortunately it was exactly the same 21/25. After reading up on Wiki about SMART and the errors, I discovered that spinup time is the time required for the disk to reach full spinning speed in milliseconds. Then it hit me that in Ubuntu I had suspended the system, essentially making all my hardware stop. And when I rebooted to Windows, the hardware doesn't really stop, so SMART's reading of the spinup time was still from Ubuntu's suspension. So I did a full PC stop and then booted up again, both in Ubuntu and Windows, to see if there are different readings. Both reported a successful spinup time, 68 (a little better than 21 :) ), although in Disk Utility I have a nice message: Failed in the Past. So now I am pretty sure that Ubuntu didn't handle the Suspend correctly, but then again should I worry about imminent hardware failure? Am I missing some drivers? Should I report this as a bug to Ubuntu? Sorry if this was a bad place to ask this question.

    Read the article

  • Transfer .com domain to GoDaddy - websites running on same domain - 3 weeks left until expiration, 2 days left web hosting

    - by Eric Nguyen
    Our company purchased this abc.com domain from a local registrar. The domain will expire in about 3 weeks. We have our main websites running on this abc.com domain and they cannot be down for too long. The web hosting service will end in 2 days. Our websites are already hosted and they are up and running on Amazon EC2. We would like to transfer the domain to GoDaddy now or as soon as possible (since we have many other domains there and we believe GoDaddy will be better in the long term considering the prices and the features it offers). There are many questions on the decision to transfer the domain to GoDaddy: 1) Cost and time required to move out of our local registrar? This is currently unknown as I'm still trying to retrieve the agreement we have with them. 2) How do the 3 weeks left until expiration of the domain matter here? Should we wait until the domain expires and then purchase it through GoDaddy? How long would such a process take, as I suppose our websites will be down during that time? Any other drawbacks? 3) What can I do to ensure our websites will continue functioning regardless of the domain transfer process? It seems the actual registrar here is enom.com and the local registrar just partners with it. I suppose I should then park the abc.com domain with enom.com and make changes to DNS settings so that our websites can continue to be hosted on EC2 as normal. How long does it normally take for a domain to be transferred to GoDaddy completely? Is it even possible to keep our websites up and running during the whole domain transfer process? Apologies that I'm throwing many questions at the same time here. It's rather last minute and I suddenly realised there are too many unknown risks.

    Read the article

  • Burned CD-R are not identical to the input iso image, why?

    - by Grumbel
    I have the issue that sometimes when I burn an iso image to a CD-R with:

    sudo wodim -v driveropts=burnfree -data dev=/dev/scd0 input.iso

    and then read it back out again with:

    sudo dd if=/dev/cdrom of=output.iso
    dd: reading `/dev/cdrom': Input/output error
    ...

    I end up with two iso images that are not identical, namely the output.iso is missing 2048 bytes at the end. When I however mount the iso image or CD-R and compare the actual files on the mountpoint, both are identical. Is that expected behavior or is that an actually incorrect burn of the data? And if it's expected, how can I verify that the burn process was successful? The reason why I ask in the first place is that it seems to be reproducible behavior: certain iso images come out 2048 bytes short, even on repeated burns, but all burned CD-Rs are identical to each other. Also, what is the reason behind the dd: reading `/dev/cdrom': Input/output error? As it happens always, I assume it is normal, but what is the technical reason behind it? I assume CDs don't allow the device to detect the size directly, so dd reads till it encounters the end the hard way. Edit: User karol on superusers.com mentioned that both the size issue and the read error are the result of using -tao (default) in wodim instead of -dao mode. I couldn't yet test it, but it sounds like the most plausible explanation so far.
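
    As an aside on verification: one way to check a burn like this, given the TAO run-out behaviour described above, is to compare checksums over exactly the number of bytes in the image, so the final short read from the drive doesn't matter. A minimal sketch in Python (assuming the image is input.iso and the disc is readable at /dev/cdrom, as in the commands above; run with enough permissions to read the device):

    import hashlib
    import os

    def checksum(path, length, block=2048 * 16):
        # MD5 over the first `length` bytes of `path`.
        h = hashlib.md5()
        remaining = length
        with open(path, "rb") as f:
            while remaining > 0:
                chunk = f.read(min(block, remaining))
                if not chunk:
                    break  # short read near the lead-out; compare what we got
                h.update(chunk)
                remaining -= len(chunk)
        return h.hexdigest()

    size = os.path.getsize("input.iso")   # compare only up to the image size
    print(checksum("input.iso", size))
    print(checksum("/dev/cdrom", size))   # should match if the burn is good

    If the two digests match, the data track is intact regardless of how the session was closed.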

    Read the article

  • How to protect UI components using OPSS Resource Permissions

    - by frank.nimphius
    ADF security protects ADF bound pages, bounded task flows and ADF Business Components entities with framework specific JAAS permission classes (RegionPermission, TaskFlowPermission and EntityPermission). If used in combination with the ADF security expression language and security checks performed in Java, this protection already provides you with fine grained access control that can also be used to secure UI components like buttons and input text fields. For example, the EL shown below disables the user profile panel tabs for unauthenticated users:

    <af:panelTabbed id="pt1" position="above">
      ...
      <af:showDetailItem text="User Profile" id="sdi2"
                         disabled="#{!securityContext.authenticated}">
      </af:showDetailItem>
      ...
    </af:panelTabbed>

    The next example disables a panel tab item if the authenticated user is not granted access to the bounded task flow exposed in a region on this tab:

    <af:panelTabbed id="pt1" position="above">
      ...
      <af:showDetailItem text="Employees Overview" id="sdi4"
                         disabled="#{!securityContext.taskflowViewable
                         ['/WEB-INF/EmployeeUpdateFlow.xml#EmployeeUpdateFlow']}">
      </af:showDetailItem>
      ...
    </af:panelTabbed>

    Security expressions like the ones shown above allow developers to check the user's permission, authentication and role membership status before showing UI components. Similarly, using Java, developers can use code like the following to verify the user's authentication status:

    ADFContext adfContext = ADFContext.getCurrent();
    SecurityContext securityCtx = adfContext.getSecurityContext();
    boolean userAuthenticated = securityCtx.isAuthenticated();

    Note that the Java code uses the same security context reference that is used with the expression language. But is this all there is? No! The goal of ADF Security is to enable all ADF developers to build secure web applications with JAAS (Java Authentication and Authorization Service). For this, more fine grained protection can be defined using the ResourcePermission, a generic JAAS permission class owned by the Oracle Platform Security Services (OPSS).
    Using the ResourcePermission class, developers can grant permission to functional parts of an application that are not protected by page or task flow security. For example, an application menu allows creating and canceling product shipments to customers. However, only a specific user group - or application role, which is the better way to use ADF Security - is allowed to cancel a shipment. To enforce this rule, a permission is needed that can be used declaratively on the UI to hide a menu entry and programmatically in Java to check the user permission before the action is performed. Note that multiple lines of defense are what you should implement in your application development. Don't just rely on UI protection through hidden or disabled command options. To create a menu protection permission for an ADF Security enabled application, you choose Application | Secure | Resource Grants from the Oracle JDeveloper menu. The opened editor shows a visual representation of the jazn-data.xml file that is used at design time to define security policies and user identities for testing. An option in the Resource Grants section is to create a new Resource Type. A list of pre-defined types exists for you to create policy definitions for. Many of these pre-defined types use the ResourcePermission class. To create a custom Resource Type, for example to protect application menu functions, you click the green plus icon next to the Resource Type select list. The Create Resource Type editor that opens allows you to add a name for the resource type, a display name that is shown when granting resource permissions, and a description. The ResourcePermission class name is already set. In the menu protection sample, you add the following information:

    Name: MenuProtection
    Display Name: Menu Protection
    Description: Permission to grant menu item permissions

    OK the dialog to close the resource type creation. To create a resource policy that can be used to check user permissions at runtime, click the green plus icon in the Resources section of the Resource Grants editor. In the Create Resource dialog, provide a name for the menu option you want to protect. To protect the cancel shipment menu option, create a resource with the following settings:

    Resource Type: Menu Protection
    Name: Cancel Shipment
    Display Name: Cancel Shipment
    Description: Grant allows user to cancel customer good shipment

    A new resource Cancel Shipment is added to the Resources panel. Initially the resource is not granted to any user, enterprise or application role. To grant the resource, click the green plus icon in the Granted To section, select the Add Application Role option and choose one or more application roles in the opened dialog. Finally, you click the process action to define the policy. Note that a permission can have multiple actions that you can grant individually to users and roles. The cancel shipment permission, for example, could have another action "view" defined to determine which users should see that this option exists and which users don't. To use the cancel shipment permission, select the disabled property on a command item, like af:commandMenuItem, and click the arrow icon on the right. From the context menu, choose the Expression Builder entry. Expand the ADF Bindings | securityContext node and click the userGrantedResource option. Hint: You can expand the Description panel below the EL selection panel to see an example of what the grant should look like.
    The EL that is created needs to be manually edited to read as

    #{!securityContext.userGrantedResource[
        'resourceName=Cancel Shipment;resourceType=MenuProtection;action=process']}

    OK the dialog so the permission checking EL is added as a value to the disabled property. Running the application and expanding the Shipment menu shows the Cancel Shipments menu item disabled for all users that don't have the custom menu protection resource permission granted. Note: Following the steps listed above, you create a JAAS permission and declaratively configure it for function security in an ADF application. Do you need to understand JAAS for this? No! This is one of the benefits that you gain from using the ADF development framework. To implement multiple lines of defense for your application, the action performed when clicking the enabled "Cancel Shipments" option should also check whether the authenticated user is allowed to process it. For this, code as shown below can be used in a managed bean:

    public void onCancelShipment(ActionEvent actionEvent) {
      SecurityContext securityCtx =
        ADFContext.getCurrent().getSecurityContext();
      //create instance of ResourcePermission(String type, String name,
      //String action)
      ResourcePermission resourcePermission =
        new ResourcePermission("MenuProtection", "Cancel Shipment",
                               "process");
      boolean userHasPermission =
        securityCtx.hasPermission(resourcePermission);
      if (userHasPermission){
        //execute privileged logic here
      }
    }

    Note: To learn more about ADF Security, visit http://download.oracle.com/docs/cd/E17904_01/web.1111/b31974/adding_security.htm#BGBGJEAH
    Note: A monthly summary of OTN Harvest blog postings can be downloaded from ADF Code Corner. The monthly summary is a PDF document that contains supporting screen shots for some of the postings: http://www.oracle.com/technetwork/developer-tools/adf/learnmore/index-101235.html

    Read the article

  • Sponsor the Hottest .NET Community Event in Germany: dotnet Cologne 2011

    - by WeigeltRo
    The “dotnet Cologne” conference, organized by the .NET user groups Bonn and Cologne, has quickly become the .NET community event in Germany. So when we opened the registration for dotnet Cologne 2011 on Monday, we expected some interest. But we didn’t expect the 200 “early bird” seats to be gone in less than three hours! And the registrations at normal price keep coming in, so it looks like this event will sell out even earlier than last year. In December I wrote about sponsorship opportunities at dotnet Cologne 2011 – and why it’s a good idea to be a sponsor at this particular conference. If you are interested in becoming a sponsor: We still offer a wide variety of sponsorship packages in different sizes. At our new, larger event location, we still have space for exhibition booths. Last year’s exhibitors were very happy and had many interesting conversations with the attendees. And this year we planned for longer breaks between sessions, which means even more time for presenting your products. And yes, German developers understand English demos. But maybe a booth is a bit too much for you. With the Bronze package, you can make sure the attendees receive promotional material from your company in their bags – for a fraction of what you’d pay at a commercial conference. Or you could sponsor a couple of licenses of your product for the raffle at the end of the day. If you want to learn more, just send an email to Roland.Weigelt at dotnet-koelnbonn.de and I’ll send you our sponsor information.

    Read the article

  • different data with same title and keywords

    - by Junaid Saeed
    Here is my scenario: I have a website where I redirect my users based on the device they are using. Let's say a user is visiting from an iPad; I take them directly to the page of iPad wallpapers, the user selects the iPad version & I take the user to the gallery of wallpapers where the user can select & download any wallpaper. Every wallpaper is the required resolution; I have my reasons for doing this. Now the thing is, there are different resolution versions of an image appearing on 5 different sections of my website, each having their own view page. Now there is only one record in the db table for the image, and based on my consistent naming convention for the images, I pick the required image. This means when 5 different pages are generated in 5 categorized sections of the website, due to a shared DB record, the keywords, the titles and every single detail of the 5 pages is the same besides the resolution of the image and the section specific details that the page has. And yeah, the pages also have different paths like

    wallpapers.com\ipad-1\cars\Ferrari-dino.html
    wallpapers.com\ipad-2\cars\Ferrari-dino.html
    wallpapers.com\ipad-3\cars\Ferrari-dino.html
    wallpapers.com\ipad-4\cars\Ferrari-dino.html
    wallpapers.com\ipad-5\cars\Ferrari-dino.html

    Now this is my scenario. How do search engines see it and how do they rank it? Is it a good, normal or bad SEO practice? If bad, how dangerous is it for my site's SEO? I need your comments on my scenario.

    Read the article

  • Advanced reporting in Oracle Service Bus

    - by [email protected]
    Reporting in OSB is useful; it allows you to audit messages going through OSB. The service bus console allows you to view the content that you reported. To report data you simply use the Report action in your proxy. The action itself is rather straightforward. You specify the content to report ($body for example), an optional key for easier search (for example the id of the record) and that's it. Sometimes though, what you want to do is a bit more complicated. I recently had a case where the key was built from the message type (XML) and the id of the message. Seems quite simple, but the id could be any element anywhere in the message depending on its type. This could be handled by 'if' statements, but adding new cases would mean changing the proxy service, and if you have lots of message types this can get boring, so I wanted the solution to be as dynamic as possible (read "just change a configuration file and that's it"). The following entry details how you can make this dynamic in your proxy by using XQuery/XSLT.

    First step: the XQuery

    We're going to use an XQuery to make the mapping between the XML message type and the location of the identifier in it. We assume here that the message type is the first node of the input XML and use a rather simple XPath to find the identifier. The XQuery looks like this for two messages:

    <reportmapping>
      <row>
        <logical>messageType1</logical>
        <type>MT1</type>
        <reportingreferencelocation>//customID</reportingreferencelocation>
      </row>
      <row>
        <logical>messageType2</logical>
        <type>MT2</type>
        <reportingreferencelocation>//theOtherIDLocation</reportingreferencelocation>
      </row>
    </reportmapping>

    Second step: the XSLT

    To get the identifier value at the dynamic path, we're going to use an XSLT transformation. This XSLT takes an XML parameter as input which contains our XPath (coming from the previous XQuery). The XSLT looks like this:

    <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:xalan="http://xml.apache.org/xalan">
      <xsl:param name="PathToNode"/>
      <xsl:template match="/">
        <IDVALUE>
          <xsl:value-of select="xalan:evaluate($PathToNode/reportingreferencelocation)"/>
        </IDVALUE>
      </xsl:template>
    </xsl:stylesheet>

    (note the use of a xalan function here.
    Xalan is the XSLT processor used in WebLogic Server.)

    Last step: the proxy service

    We're now going to wire everything together in the proxy service. First we assign the XQuery to a variable. We then get the entry in the XQuery corresponding to the record we're treating. We then extract the id of the message using the XSLT transformation. The final assign builds the variable that will be used as the reporting key. The Report action is then called with this variable. Everything is set up; we're now ready to test.

    Testing the solution

    Using the test console, we're sending our first XML ...

    <messageType1>
      <sender>test console 1</sender>
      <customID>ID12345</customID>
      <content>
        <field1>value of field 1</field1>
      </content>
    </messageType1>

    ... and a second one of another supported type:

    <messageType2>
      <header>
        <theOtherIDLocation>ID67890</theOtherIDLocation>
      </header>
      <body>
        <data>Test data</data>
      </body>
    </messageType2>

    Reporting result is:

    Conclusion

    The report is done as expected. Now if a new message type must be supported, we only have to modify the XQuery and nothing at the proxy service level. Sample project attached to this entry: sbconfig-dynamicReport.jar

    Read the article

  • Constant game speed independent of variable FPS in OpenGL with GLUT?

    - by Nazgulled
    I've been reading Koen Witters detailed article about different game loop solutions but I'm having some problems implementing the last one with GLUT, which is the recommended one. After reading a couple of articles, tutorials and code from other people on how to achieve a constant game speed, I think that what I currently have implemented (I'll post the code below) is what Koen Witters called Game Speed dependent on Variable FPS, the second on his article. First, through my searching experience, there's a couple of people that probably have the knowledge to help out on this but don't know what GLUT is and I'm going to try and explain (feel free to correct me) the relevant functions for my problem of this OpenGL toolkit. Skip this section if you know what GLUT is and how to play with it. GLUT Toolkit: GLUT is an OpenGL toolkit and helps with common tasks in OpenGL. The glutDisplayFunc(renderScene) takes a pointer to a renderScene() function callback, which will be responsible for rendering everything. The renderScene() function will only be called once after the callback registration. The glutTimerFunc(TIMER_MILLISECONDS, processAnimationTimer, 0) takes the number of milliseconds to pass before calling the callback processAnimationTimer(). The last argument is just a value to pass to the timer callback. The processAnimationTimer() will not be called each TIMER_MILLISECONDS but just once. The glutPostRedisplay() function requests GLUT to render a new frame so we need call this every time we change something in the scene. The glutIdleFunc(renderScene) could be used to register a callback to renderScene() (this does not make glutDisplayFunc() irrelevant) but this function should be avoided because the idle callback is continuously called when events are not being received, increasing the CPU load. The glutGet(GLUT_ELAPSED_TIME) function returns the number of milliseconds since glutInit was called (or first call to glutGet(GLUT_ELAPSED_TIME)). That's the timer we have with GLUT. I know there are better alternatives for high resolution timers, but let's keep with this one for now. I think this is enough information on how GLUT renders frames so people that didn't know about it could also pitch in this question to try and help if they fell like it. Current Implementation: Now, I'm not sure I have correctly implemented the second solution proposed by Koen, Game Speed dependent on Variable FPS. The relevant code for that goes like this: #define TICKS_PER_SECOND 30 #define MOVEMENT_SPEED 2.0f const int TIMER_MILLISECONDS = 1000 / TICKS_PER_SECOND; int previousTime; int currentTime; int elapsedTime; void renderScene(void) { (...) // Setup the camera position and looking point SceneCamera.LookAt(); // Do all drawing below... (...) } void processAnimationTimer(int value) { // setups the timer to be called again glutTimerFunc(TIMER_MILLISECONDS, processAnimationTimer, 0); // Get the time when the previous frame was rendered previousTime = currentTime; // Get the current time (in milliseconds) and calculate the elapsed time currentTime = glutGet(GLUT_ELAPSED_TIME); elapsedTime = currentTime - previousTime; /* Multiply the camera direction vector by constant speed then by the elapsed time (in seconds) and then move the camera */ SceneCamera.Move(cameraDirection * MOVEMENT_SPEED * (elapsedTime / 1000.0f)); // Requests to render a new frame (this will call my renderScene() once) glutPostRedisplay(); } void main(int argc, char **argv) { glutInit(&argc, argv); (...) glutDisplayFunc(renderScene); (...) 
// Setup the timer to be called one first time glutTimerFunc(TIMER_MILLISECONDS, processAnimationTimer, 0); // Read the current time since glutInit was called currentTime = glutGet(GLUT_ELAPSED_TIME); glutMainLoop(); } This implementation doesn't fell right. It works in the sense that helps the game speed to be constant dependent on the FPS. So that moving from point A to point B takes the same time no matter the high/low framerate. However, I believe I'm limiting the game framerate with this approach. Each frame will only be rendered when the time callback is called, that means the framerate will be roughly around TICKS_PER_SECOND frames per second. This doesn't feel right, you shouldn't limit your powerful hardware, it's wrong. It's my understanding though, that I still need to calculate the elapsedTime. Just because I'm telling GLUT to call the timer callback every TIMER_MILLISECONDS, it doesn't mean it will always do that on time. I'm not sure how can I fix this and to be completely honest, I have no idea what is the game loop in GLUT, you know, the while( game_is_running ) loop in Koen's article. But it's my understanding that GLUT is event-driven and that game loop starts when I call glutMainLoop() (which never returns), yes? I thought I could register an idle callback with glutIdleFunc() and use that as replacement of glutTimerFunc(), only rendering when necessary (instead of all the time as usual) but when I tested this with an empty callback (like void gameLoop() {}) and it was basically doing nothing, only a black screen, the CPU spiked to 25% and remained there until I killed the game and it went back to normal. So I don't think that's the path to follow. Using glutTimerFunc() is definitely not a good approach to perform all movements/animations based on that, as I'm limiting my game to a constant FPS, not cool. Or maybe I'm using it wrong and my implementation is not right? How exactly can I have a constant game speed with variable FPS? More exactly, how do I correctly implement Koen's Constant Game Speed with Maximum FPS solution (the fourth one on his article) with GLUT? Maybe this is not possible at all with GLUT? If not, what are my alternatives? What is the best approach to this problem (constant game speed) with GLUT? I originally posted this question on Stack Overflow before being pointed out about this site. The following is a different approach I tried after creating the question in SO, so I'm posting it here too. Another Approach: I've been experimenting and here's what I was able to achieve now. Instead of calculating the elapsed time on a timed function (which limits my game's framerate) I'm now doing it in renderScene(). Whenever changes to the scene happen I call glutPostRedisplay() (ie: camera moving, some object animation, etc...) which will make a call to renderScene(). I can use the elapsed time in this function to move my camera for instance. My code has now turned into this: int previousTime; int currentTime; int elapsedTime; void renderScene(void) { (...) // Setup the camera position and looking point SceneCamera.LookAt(); // Do all drawing below... (...) } void renderScene(void) { (...) 
// Get the time when the previous frame was rendered previousTime = currentTime; // Get the current time (in milliseconds) and calculate the elapsed time currentTime = glutGet(GLUT_ELAPSED_TIME); elapsedTime = currentTime - previousTime; /* Multiply the camera direction vector by constant speed then by the elapsed time (in seconds) and then move the camera */ SceneCamera.Move(cameraDirection * MOVEMENT_SPEED * (elapsedTime / 1000.0f)); // Setup the camera position and looking point SceneCamera.LookAt(); // All drawing code goes inside this function drawCompleteScene(); glutSwapBuffers(); /* Redraw the frame ONLY if the user is moving the camera (similar code will be needed to redraw the frame for other events) */ if(!IsTupleEmpty(cameraDirection)) { glutPostRedisplay(); } } void main(int argc, char **argv) { glutInit(&argc, argv); (...) glutDisplayFunc(renderScene); (...) currentTime = glutGet(GLUT_ELAPSED_TIME); glutMainLoop(); } Conclusion, it's working, or so it seems. If I don't move the camera, the CPU usage is low, nothing is being rendered (for testing purposes I only have a grid extending for 4000.0f, while zFar is set to 1000.0f). When I start moving the camera the scene starts redrawing itself. If I keep pressing the move keys, the CPU usage will increase; this is normal behavior. It drops back when I stop moving. Unless I'm missing something, it seems like a good approach for now. I did find this interesting article on iDevGames and this implementation is probably affected by the problem described on that article. What's your thoughts on that? Please note that I'm just doing this for fun, I have no intentions of creating some game to distribute or something like that, not in the near future at least. If I did, I would probably go with something else besides GLUT. But since I'm using GLUT, and other than the problem described on iDevGames, do you think this latest implementation is sufficient for GLUT? The only real issue I can think of right now is that I'll need to keep calling glutPostRedisplay() every time the scene changes something and keep calling it until there's nothing new to redraw. A little complexity added to the code for a better cause, I think. What do you think?
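
    For reference, the "Constant Game Speed with Maximum FPS" approach from Koen Witters' article boils down to a fixed update rate with free-running rendering and interpolation. A language-agnostic sketch of that pattern, written here in Python with hypothetical update_game/render callbacks rather than the GLUT code above:

    import time

    TICKS_PER_SECOND = 25
    SKIP_TICKS = 1.0 / TICKS_PER_SECOND   # seconds of game time per update
    MAX_FRAMESKIP = 5

    def game_loop(update_game, render, game_is_running):
        # update_game() advances the simulation by one fixed tick;
        # render(alpha) draws the scene, interpolating by alpha in [0, 1).
        next_tick = time.monotonic()
        while game_is_running():
            loops = 0
            # Catch the simulation up to real time, one fixed tick at a time.
            while time.monotonic() > next_tick and loops < MAX_FRAMESKIP:
                update_game()
                next_tick += SKIP_TICKS
                loops += 1
            # Render as fast as the hardware allows, interpolating between ticks.
            alpha = (time.monotonic() + SKIP_TICKS - next_tick) / SKIP_TICKS
            render(alpha)

    Under GLUT the explicit while loop isn't available, so the equivalent would presumably be to run the fixed-tick catch-up inside the display (or idle) callback and end it with glutPostRedisplay(), which is roughly what the second approach above is converging towards.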

    Read the article

  • VLC package dependencies broken & cannot be resolved

    - by reep0n
    I'm using Ubuntu 12.04 32bit. Today some updates suddenly arrived, but it wasn't a normal update: it asked me to do a partial upgrade. When I did this, the update manager wanted to remove vlc, and I let it. But when I then tried to install vlc from Synaptic Package Manager, I found broken dependencies there. Now, if I try to install vlc from the Software Centre, it pops up:

    The following packages have unmet dependencies:
    vlc: Depends: fonts-freefont-ttf but it is not going to be installed
         Depends: vlc-nox (= 2.0.3+git20121005+r392-0~r42~precise1) but 2.0.3+git20121005+r392-0~r42~precise1 is to be installed
         Depends: libavcodec-extra-53 (= 4:0.8-1~) but 4:0.8.3ubuntu0.12.04.1 is to be installed
         Depends: libavutil-extra-51 (= 4:0.8-1~) but 4:0.8.3ubuntu0.12.04.1 is to be installed
         Depends: libc6 (= 2.15) but 2.15-0ubuntu10.2 is to be installed
         Depends: libfreetype6 (= 2.2.1) but 2.4.8-1ubuntu2 is to be installed
         Depends: libgcc1 (= 1:4.1.1) but 1:4.6.3-1ubuntu5 is to be installed
         Depends: libqtcore4 (= 4:4.8.0) but 4:4.8.2+dfsg-2ubuntu1~precise1~ppa1 is to be installed
         Depends: libqtgui4 (= 4:4.7.0~beta1) but 4:4.8.2+dfsg-2ubuntu1~precise1~ppa1 is to be installed
         Depends: libstdc++6 (= 4.6) but 4.6.3-1ubuntu5 is to be installed
         Depends: zlib1g (= 1:1.2.3.3.dfsg) but 1:1.2.3.4.dfsg-3ubuntu4 is to be installed

    So now what should I do? How do I resolve this problem?

    Read the article

  • Timeout Considerations for Solicit Response

    - by Michael Stephenson
    Background One of the clients I work with had been experiencing some issues for a while surrounding web service timeouts.  It's been a little challenging to work through the problems due to limitations in the diagnostic information available from one of the applications, but I learned some interesting things while troubleshooting the problem which don't seem to have been discussed much in the community so I thought I'd share my findings. In the scenario we have BizTalk trying to make calls to a .net web service which was exposed as a WSE 2 endpoint.  In the process BizTalk will try to make a large number of concurrent web service calls to the application, and the backend application has more than enough infrastructure and capability to handle the load. We have configured the <ConnectionManagement> section of the BizTalk configuration file to support up to 100 concurrent connections from each of our 2 BizTalk send servers to the web servers of the application. The problem we were facing was that the BizTalk side was reporting a significant number of timeouts when calling the web service.   One of the biggest issues was the challenge of being able to correlate a message from BizTalk to the IIS log in the .net application and the custom logs in the application especially when there was a fairly large number of servers hosting the web services.  However the key moment came when we were able to identify a specific call which had taken 40 seconds to execute on the server (yes a long time I know but that's a different story!).  Anyway we were able to identify that this had timed out on the BizTalk side.  Based on the normal 2 minute timeout we knew something unexpected was going on. From here I decided to do some experimentation and I wanted to start outside of BizTalk because my hunch was this was not a BizTalk behaviour but something which was being highlighted by BizTalk because of our large load.     Server-side - Sample Web Service To begin with I created a sample web service.  Nothing special just a vanilla asmx web service hosted in IIS6 on Windows 2003 Standard Edition.  The web service is just a hello world style web service as shown in the below picture.  The only key feature is that the server side web method has a 30 second sleep in it and will trace out some information before and after the thread is set to sleep.      In the configuration for this web service there again is nothing special it's pretty much the most plain simple web service you could build. Client-Side To begin looking at what was happening with our example I created a number of different ways to consume the web service. SoapHttpClientProtocol Example I created a small application which would use a normal proxy generated to call the web service.  It would iterate around a loop and make calls using the begin/end methods so I can do this asynchronously.  I would do a loop of 20 calls with the ConnectionManager configuration section supporting only 5 concurrent connections to the server.     <connectionManagement> <remove address="*"/> <add address = "*" maxconnection = "12" /> <add address = "http://<ServerName>" maxconnection = "5" />                         </connectionManagement> </system.net>     The below picture shows an example of the service calling code, key points are: I have configured the timeout of 40 seconds for the proxy I am using the asynchronous methods on the proxy to call the web service         The Test I would run the client and execute 21 calls to the web service.   
The Results  Below is the client side trace showing what's happening on the client. In the below diagram is the web service side trace showing what's happening on the server Some observations on the results are: All of the calls were successful from the clients perspective You could see the next call starting on the server as soon as the previous one had completed Calls took significantly longer than 40 seconds from the start of our call to the return. In fact call 20 took 2 minutes and 30 seconds from the perspective of my code to execute even though I had set the timeout to 40 seconds     WSE 2 Sample In the second example I used the exact same code to call the web service again with a single exception that I modified the web service proxy to derive from WebServiceClient protocol which is part of WSE 2 (using SP3).  The below picture shows the basic code and the key points are: I have configured the timeout of 40 seconds for the proxy I am using the asynchronous methods on the proxy to call the web service        The Test This test would execute 21 calls from the client to the web service.   The Results  The below trace is from the client side: The below trace is from the server side:   Some observations on the trace results for this scenario are: With call 4 if you look at the server side trace it did not start executing on the server for a number of seconds after the other 4 initial calls which were accepted by the server. I re-ran the test and this happened a couple of times and not on most others so at this point I'm just putting this down to something unexpected happening on the development machine and we will leave this observation out of scope of this article. You can see that the client side trace statement executed almost immediately in all cases All calls after the initial few calls would timeout On the client side the calls that did timeout; timed out in a longer duration than the 40 seconds we set as the timeout You can see that as calls were completing on the server the next calls were starting to come through The calls that timed out on the client did actually connect to the server and their server side execution completed successfully     Elaboration on the findings Based on the above observations I have drawn the below sequence diagram to illustrate conceptually what is happening.  Everything except the final web service object is on the client side of the call. In the diagram below I've put two notes on the Web Service Proxy to show the two different places where the different base classes seem to start their timeout counters. From the earlier samples we can work out that the timeout counter for the WSE web service proxy starts before the one for the SoapHttpClientProtocol proxy and the WSE one includes the time to get a connection from the pool; whereas the Soap proxy timeout just covers the method execution. One interesting observation is if we rerun the above sample and increase the number of calls from 21 to 100,000 then for the WSE sample we will see a similar pattern where everything after the first few calls will timeout on the client as soon as it makes a connection to the server whereas the soap proxy will happily plug away and process all of the calls without a single timeout. I have actually set the sample running overnight and this did happen. At this point you are probably thinking the same thoughts I was at the time about the differences in behaviour and which is right and why are they different? 
    I'm not sure there is a definitive answer to this in the documentation, or at least not one that I could find! I think you just have to consider that they are different and they could have different effects depending on your messaging solution. In lots of situations this is just not an issue, as your concurrent requests don't get to the point where you end up throttling the web service calls on the client side; however, this is definitely more common with an integration broker such as BizTalk where you often have high throughput requirements.

    Some of the considerations you should make

    Based on this behaviour you should be aware of the following: In a .net application, if you are making lots of concurrent web service calls in an asynchronous manner, your users may think they are experiencing poor performance while you think your web service is working well. The problem could be that the client has a default of 2 connections to remote servers, so you should bear this in mind. When you are developing a BizTalk solution or a .net solution with the WSE 2 stack, you may experience timeouts under load, and throttling the number of connections using the max connections element in the configuration file will not help you. For an application using WSE2 or SoapHttpClientProtocol, an expired timeout will not throw an error until after a connection to the server has been made, so you should consider this in your transaction and durability patterns.

    Our Work Around

    In the short term, for our specific scenario, we know that we can handle this by just increasing our timeout value. There is only a specific small window when we get lots of concurrent traffic that causes this scenario, so we should be able to increase the timeout to take into consideration the additional client side wait, and on the odd occasion where we do get a timeout the BizTalk send port retry will handle this. What was causing our original problem was that for that short window we were getting a lot of retries which significantly increased the load on our send servers and highlighted the issue.

    Longer Term Solution

    As a longer term solution this really gives us more ammunition to argue for a migration to WCF. The application we are calling has some factors which limit the protocols we can use, but with WCF we would have more control over the various timeout options because in WCF you can configure specific parts of the timeout.

    Summary

    I've had this blog post on my to do list for ages, but hopefully it will be useful to some people to just understand this behaviour and to possibly help you with some performance issues you may have. I do not believe there is too much in the way of documentation, particularly around WSE2 and ASMX in this area, so again another bit of ammunition for migrating to WCF. I'll try to do a follow up post with the sample for WCF to show how this changes things.
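
    To make the timing difference concrete, here is a small, purely illustrative Python sketch (asyncio, with a semaphore standing in for the client-side connection pool - not the WSE or BizTalk code itself). Starting the timeout clock before a pooled connection is acquired, as the WSE-style proxy appears to do, makes queued calls time out even though the server is processing everything it receives:

    import asyncio

    POOL_SIZE = 5        # stands in for maxconnection
    CALL_SECONDS = 30    # server-side work, like the Sleep in the sample service
    TIMEOUT = 40         # client-side timeout

    pool = asyncio.Semaphore(POOL_SIZE)

    async def call_server(i):
        async with pool:                       # wait for a "connection"
            await asyncio.sleep(CALL_SECONDS)  # the web method itself
            return i

    async def wse_style(i):
        # The clock starts before the connection is acquired, so queued
        # calls burn their timeout while waiting for the pool.
        try:
            await asyncio.wait_for(call_server(i), TIMEOUT)
            print(f"call {i}: ok")
        except asyncio.TimeoutError:
            print(f"call {i}: timed out")

    async def main():
        await asyncio.gather(*(wse_style(i) for i in range(21)))

    asyncio.run(main())

    Only the first batch completes; everything queued behind the pool times out at the 40 second mark. Moving the wait_for inside the "async with pool" block, so the clock only starts once a connection is held, reproduces the SoapHttpClientProtocol-style behaviour where the calls simply take longer but never time out.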

    Read the article

  • Bash History not containing all history and blank after reboot, how to resolve?

    - by TryTryAgain
    I've recently upgraded from 13.04 to 13.10 and realized my terminal bash history is not surviving reboots. cat ~/.bash_history gave me a permissions denied error. I, possibly unnecessarily or wrongly, issued a chmod 777 ~/.bash_history to see if that would help...and although I could then cat and read some contents it contained not much of anything as far as history. I also tried sudo rm ~/.bash_history after reading bash history not being preserved Strangely, after doing that, I typed a few test commands, ls, ls -lah ... and upon pressing the up arrow to go back through history it contained those two commands as well as the odd history from some far off time in the past but very few results and not the hundreds of commands I typed earlier in the day. Is there a new place bash history is stored? How can removing ~/.bash_history not get rid of the commands that are somehow lingering? I am not certain, but I believe my root bash history is acting normal. My user bash history is what's causing me trouble. Any help and guidance in tracking down and solving this problem is appreciated.

    Read the article

  • Leadership does not see value in standard process for machine configuration and new developer orientation

    - by opensourcechris
    About 3 months ago our lead web developer and designer (same person) left the company; greener pastures was the reason for leaving. Good for them, I say. My problem is that his department was completely undocumented. Things have been tough since the lead left: there is a lot of knowledge, both the theoretical knowledge we use to quote new projects and the technical/implementation knowledge of our existing products, that we have lost as a result of his departure. My normal role is as a product manager (for our products themselves) and as a business analyst for some of our project based consulting work. I've taught myself to code over the past year and, in an effort to continue moving forward, I've taken on the task of setting my laptop up as a development machine with hopes of implementing some of the easier feature requests and fixing some of the no brainer bugs that get submitted into our ticketing system. But no one knows how to take a fresh Windows machine and configure it to work seamlessly with our production apps. I have asked my boss, who is still in contact with the developer who left, to have them document and create a process to onboard a new developer: software installation, required packages, process to deploy to the production application servers, etc. None of this exists, and I'm spinning my wheels trying to get my computer working as a functional development machine. But she does not seem to understand the need for such a process to exist. Apparently the new developer who replaced the one who left has been using a machine that was pre-configured for our environment, so even the new developer could not set up a new machine if we added another developer. My question is two part: Am I wrong in assuming a process to on-board and configure a new computer to be part of our development eco-system should exist? Am I being a whiny baby and should I figure the process out and create a document on my own?

    Read the article

  • How to build MVC Views that work with polymorphic domain model design?

    - by Johann de Swardt
    This is more of a "how would you do it" type of question. The application I'm working on is an ASP.NET MVC4 app using Razor syntax. I've got a nice domain model which has a few polymorphic classes, awesome to work with in the code, but I have a few questions regarding the MVC front-end. Views are easy to build for normal classes, but when it comes to the polymorphic ones I'm stuck on deciding how to implement them. The one (ugly) option is to build a page which handles the base type (eg. IContract) and has a bunch of if statements to check if we passed in an IServiceContract or ISupplyContract instance. Not pretty and very nasty to maintain. The other option is to build a view for each of these IContract child classes, breaking DRY principles completely. Don't like doing this for obvious reasons. Another option (also not great) is to split the view into chunks with partials and build partial views for each of the child types that are loaded into the main view for the base type, then deciding to show or hide the partial in a single if statement in the partial. Also messy. I've also been thinking about building a master page with sections for the fields that only occur in subclasses and to build views for each subclass referencing the master page. This looks like the least problematic solution? It will allow for fairly simple maintenance and it doesn't involve code duplication. What are your thoughts? Am I missing something obvious that will make our lives easier? Suggestions?

    Read the article

  • Hide email adress with JavaScript

    - by Martin Aleksander
    I read somewhere that hiding an email address behind JavaScript code could reduce spam bots harvesting the email address.

    <script language="javascript" type="text/javascript">
    var a = "Red";
    var t = "no";
    var doc = document;
    var b = "ITpro";
    var ad = a;
    ad += "@";
    ad += b;
    ad += ".";
    ad += t;
    var mt = "ma";
    mt += "il";
    mt += "to";
    var text = "";
    if (text == null || text.length == 0) text = ad;
    doc.write("<"+"a hr"+"ef=\""+mt+":"+ad+"\">"+text+"</"+"a>");
    </script>

    This will not display the actual email address in the source code of the page, but it will display and work like a normal link for human users. Is there any point in doing this? Will it reduce spam bots, or is it just nonsense that might slow down the performance of the page because of the JavaScript?

    Read the article

  • CLR 4.0: Corrupted State Exceptions

    - by Scott Dorman
    Corrupted state exceptions are designed to help you have fewer bugs in your code by making it harder to make common mistakes around exception handling. A very common pattern is code like this:

    public void FileSave(String name)
    {
        try
        {
            FileStream fs = new FileStream(name, FileMode.Create);
        }
        catch (Exception e)
        {
            MessageBox.Show("File Open Error");
            throw new Exception("File Open Error", e);
        }
    }

    The standard recommendation is not to catch System.Exception but rather to catch the more specific exceptions (in this case, IOException). While this is a somewhat contrived example, what would happen if the Exception were really an AccessViolationException or some other exception indicating that the process state has been corrupted? What you really want to do is get out fast before persistent data is corrupted or more work is lost. To help solve this problem and minimize the chance that you will catch exceptions like this, CLR 4.0 introduces Corrupted State Exceptions, which cannot be caught by normal catch statements. There are still places where you do want to catch these types of exceptions, particularly in your application’s “main” function or when you are loading add-ins. There are also rare circumstances when you know code that throws an exception isn’t dangerous, such as when calling native code. In order to support these scenarios, a new HandleProcessCorruptedStateExceptions attribute has been added. This attribute is added to the function that catches these exceptions. There is also a process wide compatibility switch named legacyCorruptedStateExceptionsPolicy which, when set to true, will cause the code to operate under the older exception handling behavior. Technorati Tags: CLR 4.0, .NET 4.0, Exception Handling, Corrupted State Exceptions

    Read the article

  • How do I properly set up and secure a production LAMP Server?

    - by Niklas
    It's very hard to find comprehensive information on this subject. Either I found short tutorials on how to perform the installation, as simple as "apt-get install apache2", or outdated tutorials. So I was hoping I could get some professional information from my fellow members of the Ubuntu community :D I have done a normal Ubuntu Server 11.04 install with LAMP, SAMBA and SSH selected during the system installation. But I'm having some trouble setting up virtual hosts and making the system secure enough to expose the server to the web. I've somewhat followed this tutorial this far. I have 3 sites in /etc/apache2/sites-available which all look like this except for different site names:

    <VirtualHost example.com>
        ServerAdmin webmaster@localhost
        ServerAlias www.edunder.se
        DocumentRoot /var/www/sites/example
        CustomLog /var/log/apache2/www.example.com-access.log combined
    </VirtualHost>

    And I have enabled them with the command a2ensite, so I have symbolic links in /etc/apache2/sites-enabled. My /etc/hosts file has these lines:

    127.0.0.1 localhost
    127.0.1.1 Ubuntu.lan Ubuntu
    127.0.0.1 localhost.localdomain localhost example.com www.example.com
    127.0.0.1 localhost.localdomain localhost example2.com www.example2.com
    127.0.0.1 localhost.localdomain localhost example3.com www.example3.com

    And I can only access one of them from the browser (I have lynx installed on the server for testing purposes), so I guess I haven't set them up properly :) How should I proceed to get a secure and proper setup? I also use MySQL, and I think that this tutorial will be enough to set up SSH securely. Please help me understand Apache configuration better since I'm new to setting up my own server (I've only run XAMPP earlier), and please advise on how I should set up a firewall as well :D

    Read the article

  • Why does my terrain turn white when I get close to it?

    - by Starkers
    When I zoom in on my terrain it goes white. The further in I zoom, the greater the whiteness becomes. Is this normal? Is this to speed up rendering or something? Can I turn it off? I'm also getting these error messages in the console over and over again:

    rc.right != m_GfxWindow->GetWidth() || rc.bottom != m_GfxWindow->GetHeight()

    and

    GUI Window tries to begin rendering while something else has not finished rendering! Either you have a recursive OnGUI rendering, or previous OnGUI did not clean up properly.

    Does this bear any relation to the issue? Update: I create virtual desktops to flit between using the program Deskpot. Turning this program off and restarting has stopped the above errors appearing in the console. However, I still get white terrain when I zoom in. Not a single error message. I've restarted my computer to no avail. I have an Asus NVidia GeForce GTX 760 2GB DDR5 Direct CU II OC Edition graphics card. Any known issues? Update: I don't think it's fog...

    Read the article

  • How can I move a polygon edge 1 unit away from the center?

    - by Stephen
    Let's say I have a polygon class that is represented by a list of vector classes as vertices, like so: var Vector = function(x, y) { this.x = x; this.y = y; }, Polygon = function(vectors) { this.vertices = vectors; }; Now I make a polygon (in this case, a square) like so: var poly = new Polygon([ new Vector(2, 2), new Vector(5, 2), new Vector(5, 5), new Vector(2, 5) ]); So, the top edge would be [poly.vertices[0], poly.vertices[1]]. I need to stretch this polygon by moving each edge away from the center of the polygon by one unit, along that edge's normal. The following example shows the first edge, the top, moved one unit up: The final polygon should look like this new one: var finalPoly = new Polygon([ new Vector(1, 1), new Vector(6, 1), new Vector(6, 6), new Vector(1, 6) ]); It is important that I iterate, moving one edge at a time, because I will be doing some collision tests after moving each edge. Here is what I tried so far (simplified for clarity), which fails triumphantly: for(var i = 0; i < vertices.length; i++) { var a = vertices[i], b = vertices[i + 1] || vertices[0]; // in case of final vertex var ax = a.x, ay = a.y, bx = b.x, by = b.y; // get some new perpendicular vectors var a2 = new Vector(-ay, ax), b2 = new Vector(-by, bx); // make into unit vectors a2.convertToUnitVector(); b2.convertToUnitVector(); // add the new vectors to the original ones a.add(a2); b.add(b2); // the rest of the code, collision tests, etc. } This makes my polygon start slowly rotating and sliding to the left, instead of what I need. Finally, the example shows a square, but the polygons in question could be anything. They will always be convex, and always with vertices in clockwise order.
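
    For what it's worth, the usual way to get an edge's normal is to take the perpendicular of the edge direction (b - a) rather than of the vertex positions themselves, normalize it, flip it if it points towards the polygon's centre, and then move both endpoints of that edge by it. A small sketch of that calculation in Python (plain (x, y) tuples standing in for the Vector class above; the function name is made up for illustration):

    import math

    def edge_offset(vertices, i, distance=1.0):
        # Returns the two endpoints of edge (vertices[i], vertices[i+1])
        # moved `distance` units along the edge's outward normal.
        n = len(vertices)
        ax, ay = vertices[i]
        bx, by = vertices[(i + 1) % n]

        # Perpendicular of the edge direction, normalized.
        ex, ey = bx - ax, by - ay
        nx, ny = -ey, ex
        length = math.hypot(nx, ny)
        nx, ny = nx / length, ny / length

        # Flip the normal if stepping along it moves the edge midpoint
        # closer to the polygon's centre (i.e. it points inward).
        cx = sum(x for x, _ in vertices) / n
        cy = sum(y for _, y in vertices) / n
        mx, my = (ax + bx) / 2, (ay + by) / 2
        if (mx + nx - cx) ** 2 + (my + ny - cy) ** 2 < (mx - cx) ** 2 + (my - cy) ** 2:
            nx, ny = -nx, -ny

        return (ax + nx * distance, ay + ny * distance), \
               (bx + nx * distance, by + ny * distance)

    Applied edge by edge to the square above (writing the moved endpoints back each time), every vertex ends up moved by both of its adjacent edges, which gives the (1,1)...(6,6) result shown in finalPoly.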

    Read the article

  • Oracle Introduces a New Line of Defense for Databases

    - by roxana.bradescu
    Today at the 2011 RSA Conference, we announced the immediate availability of our new Oracle Database Firewall, the latest addition to a comprehensive portfolio of database security solutions. Oracle Database Firewall is a network-based software solution that monitors database traffic and can detect and block SQL injection and other attacks before they reach Oracle and non-Oracle databases. According to the 2010 Verizon Data Breach Investigations Report, SQL injection attacks against databases are responsible for 89% of all breached data. SQL injection is a technique for manipulating the database server's responses through the application; it exploits the inherent trust between the application layer and the back-end database. Previously, the only way organizations could safeguard against SQL injection attacks was a complete overhaul of their application code - obviously a very costly, complex, and often impossible undertaking for most organizations. Enter the new Oracle Database Firewall. It can help prevent SQL injection attacks by establishing a defensive perimeter around your databases. Oracle Database Firewall uses innovative SQL grammar analysis to inspect database traffic against pre-defined policies. Normal, expected traffic is allowed to pass (and can optionally be logged to demonstrate regulatory compliance), ensuring no false positives or disruption to your business. SQL statements that are explicitly forbidden, as well as unknown SQL statements, can be passed, logged, alerted on, blocked, or substituted with pre-defined SQL statements. Being able to substitute an unknown, potentially harmful SQL statement with a harmless one is especially powerful, since it foils an attack while allowing the application to operate normally and preventing denial-of-service conditions. So, if you're at RSA, stop by our booth or attend the session with Steve Moyle, Oracle Database Firewall CTO. Or, if you want to learn more immediately, please watch our on-demand webcast and download the new Oracle Database Firewall Resource Kit with everything you need to get started today.

    Read the article

  • Access Control and Accessibility in Oracle IRM 11g

    - by martin.abrahams
    A recurring theme you'll find throughout this blog is that IRM needs to balance security with usability and manageability. One of the innovations in Oracle IRM 11g typifies this: we have introduced a new right that may be included in any role - Accessibility. When creating or modifying a role, you simply select Accessibility along with Open, Print, Edit or whatever other rights you want to include in the role. You might, for example, have parallel roles of Reader and Reader with Accessibility, and Contributor and Contributor with Accessibility. The effect of the Accessibility right is to relax some of the protection of content in use so that selected users can use accessibility tools. For example, a user with the Accessibility right would be able to use the screen magnification tool, which IRM would ordinarily prevent because it involves screen capture. This new right makes it easy for you to apply security to documents and yet, subject to suitable approval processes, cater for the fact that a subset of users might be disproportionately inconvenienced by some of the normal usage constraints. Rather than make those users put up with the restrictions, or perhaps exempt them from using sealed documents altogether, this new right allows you to accommodate them in a controlled manner, and to balance security with corporate accessibility goals.

    Read the article

  • Confused about application submission

    - by snowflake
    I finished my app but I'm totally confused about how to submit it to Launchpad and the Software Center. I'm also new to Launchpad. 1) I created a project on Launchpad and synchronized an OpenPGP key via the password tool. This was about 30 minutes ago, but Launchpad still shows "No OpenPGP keys registered." Is this normal? 2) What do I have to do next with the PPA? 3) How do I submit my app? Do I have to adjust files with the PPA? The showdown blog says that quickly submitubuntu is enough, but here in the forum I read about quickly release or quickly share. So which one is correct? 4) Do I have to add this file to Launchpad as well? 5) Do I get a confirmation that my app was correctly submitted and is taking part in the contest? Since I'm doing this for the first time, I want to be sure that everything is OK. I'm sorry for all my questions; I know there are many topics about this, but everywhere something different is written and I'm confused about how to proceed. Thanks in advance!
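    On point 1, one possible explanation: Launchpad imports OpenPGP keys from the Ubuntu keyserver rather than taking a key file directly, so the public key has to be pushed there first and the fingerprint pasted into the profile, after which Launchpad sends an encrypted confirmation mail that has to be answered before the key shows up. A rough sketch of those steps (YOURKEYID is a placeholder):

        # List local keys and note the ID / fingerprint of the one to register
        gpg --list-keys
        gpg --fingerprint YOURKEYID

        # Publish the public key to the keyserver Launchpad reads from
        gpg --keyserver keyserver.ubuntu.com --send-keys YOURKEYID

    Keyserver propagation alone can take well over 30 minutes, so seeing "No OpenPGP keys registered" shortly after uploading is not necessarily a sign that something went wrong.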

    Read the article

  • Copying files to a ZFS mountpoint doesn't work - the files aren't actually copied to the other filesystem

    - by user113904
    I have 3 x 4 TB disks in a NAS that I want to group together and access as if they were one whole 'unit' of some kind. I also have a 250GB disk containing the OS - this is currently full of films and TV shows. I thought ZFS sounded good, so after installing the PPA I created a raidz zpool: sudo zpool create store raidz /dev/sdb /dev/sdc /dev/sdd and set the mountpoint to /mnt/store: sudo zfs set mountpoint=/mnt/store /store I checked it was successful - I think it was: sudo zfs list NAME USED AVAIL REFER MOUNTPOINT store 266K 7.16T 170K /mnt/store Then I wanted to move over a whole load of files from my home directory. I went to where the to-be-copied folder was (called media) and entered sudo cp -R * /mnt/store cp: cannot create directory `/mnt/store/media': No space left on device It seems like it's not copying over to the new filesystem I made (or thought I did). I've never really done this type of thing until a few days ago, so I may be running before I can walk... is this not the right way to copy files across? I've only used Windows before, so the idea of mountpoints is a bit mind-boggling. I'm using XBMCbuntu 12 beta 2.0, which is based on 12.04. I will retry with normal Ubuntu 12.04 desktop to see if that's the problem. Thanks for the help!
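    One possible explanation for "No space left on device" against a nearly empty 7 TB pool is that the copy is landing on the small root disk: /mnt/store exists as an ordinary directory, but the ZFS dataset is not actually mounted on it (as an aside, zfs set normally expects the dataset name, store, rather than /store). A quick sanity check before copying, sketched with the pool name used above:

        # Is the dataset mounted, and where?
        zfs get mounted,mountpoint store

        # If it is not mounted, mount it (or mount every dataset in the pool)
        sudo zfs mount store        # or: sudo zfs mount -a

        # Confirm /mnt/store really is the 7 TB pool and not the 250GB root disk
        df -h /mnt/store

    If df still reports the root filesystem, anything already copied into /mnt/store will need to be moved aside before the dataset can be mounted there cleanly.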

    Read the article

  • Syntactic sugar in PHP with static functions

    - by Anna
    The dilemma I'm facing is: should I use static classes for the components of an application just to get a nicer-looking API? Example - the "normal" way: // example component abstract class Cache{ abstract function get($k); abstract function set($k, $v); } class APCCache extends Cache{ ... } class application{ function __construct(){ $this->cache = new APCCache(); } function whatever(){ $this->cache->set('blabla', 5); print $this->cache->get('blabla'); } } Notice how ugly $this->cache->... is. But it gets way uglier when you try to make the application extensible through plugins, because then you have to pass the application instance to its plugins, and you get $this->application->cache->... With static functions: interface CacheAdapter{ function get($k); function set($k, $v); } class Cache{ public static $ad; public static function setAdapter(CacheAdapter $ad){ static::$ad = $ad; } public static function get($k){ return static::$ad->get($k); } ... } class APCCache implements CacheAdapter{ ... } class application{ function __construct(){ cache::setAdapter(new APCCache); } function whatever(){ cache::set('blabla', 5); print cache::get('blabla'); } } Here it looks nicer because you just call cache::get() everywhere. The disadvantage is that I lose the ability to extend this class easily. But I've added a setAdapter method to make the class extensible to some extent. I'm relying on the fact that I won't ever need a rewrite to replace the cache wrapper, and that I won't need to run multiple application instances simultaneously (it's basically a site, and nobody works with two sites at the same time). So, am I doing it wrong?

    Read the article
