Search Results

Search found 27144 results on 1086 pages for 'tail call optimization'.


  • Game: Age of Empires sounds good but video is "out of range"

    - by Ezekiel
    I'm new to the Ubuntu realm. Currently I'm using Linux Mint 12 with Wine 1.4 and PlayOnLinux as game loading/playing programs. My video card is an MSI GeForce FX5200 (NVIDIA) and is 3D enabled. I can play the "Call of Duty 5" demo just fine. My real love is the Age of Empires series of games. I loaded the Wine version of the AOE 1 demo: no sound and no picture, just a black screen with an "Out of Range" window in red. I loaded my CD version of AOE 1 through PlayOnLinux: I get the sound just fine, but again the black screen with the "Out of Range" window in red. I have tried all the monitor settings, both in the game's settings and in winecfg; none of the eight monitor options worked in any combination. I have checked all the questions and blogs on this error and tried everything I found, and no one seems to come up with a real fix. I guess I need to know exactly what "Out of Range" means. Any help? Anywhere? Thanks

    Read the article

  • Java Embedded @ JavaOne

    - by Tori Wieldt
    Developers, tell your manager (or the other half of your developer-entrepreneur self) about this new event being held Wednesday, Oct. 3rd and Thursday, Oct. 4th in San Francisco at the Hotel Nikko (during JavaOne). Java Embedded @ JavaOne is designed to provide business and technical decision makers, as well as Java embedded ecosystem partners, a unique occasion to come together and learn about how they can use Java Embedded technologies for new business opportunities. The ideal audience for this event is business and technical decision makers (e.g. System Integrators, CTOs, CXOs, Chief Architects/Architects, Business Development Managers, Project Managers, Purchasing Managers, Technical Leads, Senior Decision Makers, Practice Leads, R&D Heads, and Development Managers/Leads). A call for papers has gone out, but it is ONLY for business-focused submissions. Event organizers are looking for best practices, case studies and panel discussions on emerging opportunities in the Java embedded space. Please consider submitting a paper. The deadline for submission is July 18. Attendees of both JavaOne and Oracle OpenWorld can attend Java Embedded @ JavaOne by purchasing a $100.00 USD upgrade to their full conference pass. Rates for attending Java Embedded @ JavaOne alone are here.

    Read the article

  • Oracle ADF and Simplified UI Apps: I18n Feng Shui on Display

    - by ultan o'broin
    I demoed the Hebrew language version of Oracle Sales Cloud Release 8 live in Israel recently. The crowd was yet again wowed by the simplified UI (SUI). I've now spent some time playing around with most of the 23 language versions, or the NLS (National Language Support) versions as we'd call them, available in Release 8.
    [Image: Hebrew Oracle Sales Cloud Release 8]
    The simplified UI is built using 100% Oracle ADF. This framework is a great solution for developers to productively build tablet-first, mobility-driven apps for users who work and live using natural languages other than English. Oracle ADF's internationalization (i18n) relies on built-in Java and Unicode support, packing in i18n goodness such as Bi-Di (bi-directional) flipping of pages, locale-enabled resource bundles, date and time support, and so on.
    [Image: Comparing German (left) and Hebrew Bi-Di (right) page components in the simplified UI. Note the change in the direction of the arrows and the positions of the text.]
    So developers who need to build global apps don't have to do anything special when using Oracle ADF components, all thanks to the baked-in UX Feng Shui, as Grant Ronald of the ADF team would say to the UK Oracle User Group. Find out more about ADF i18n from Frédéric Desbiens (@blueberrycoder) on the ADF Architecture TV channel.

    Read the article

  • Determine All SQL Server Table Sizes

    I'm doing some work to migrate and optimize a large-ish (40GB) SQL Server database at the moment. Moving such a database between data centers over the Internet is not without its challenges. In my case, virtually all of the size of the database is the result of one table, which has over 200M rows of data. To determine the size of this table on disk, you can run the sp_spaceused stored procedure, like so:

        EXEC sp_spaceused lq_ActivityLog

    This returns the table's row count along with its reserved, data, index, and unused space. Of course, this only shows one table; if you have a lot of tables and need to know which ones are taking up the most space, it would be nice if you could run a query to list all of the tables, perhaps ordered by the space they're taking up. Thanks to Mitchel Sellers (and Gregg Stark's CURSOR template) and a tiny bit of my own edits, now you can! Create the stored procedure below and call it to see a listing of all user tables in your database, ordered by their reserved space.

        -- Lists space used for all user tables
        CREATE PROCEDURE GetAllTableSizes
        AS
        DECLARE @TableName VARCHAR(100)
        DECLARE tableCursor CURSOR FORWARD_ONLY
        FOR select [name]
            from dbo.sysobjects
            where OBJECTPROPERTY(id, N'IsUserTable') = 1
        FOR READ ONLY
        CREATE TABLE #TempTable
        (
            tableName varchar(100),
            numberofRows varchar(100),
            reservedSize varchar(50),
            dataSize varchar(50),
            indexSize varchar(50),
            unusedSize varchar(50)
        )
        OPEN tableCursor
        WHILE (1=1)
        BEGIN
            FETCH NEXT FROM tableCursor INTO @TableName
            IF(@@FETCH_STATUS<>0) BREAK;
            INSERT #TempTable EXEC sp_spaceused @TableName
        END
        CLOSE tableCursor
        DEALLOCATE tableCursor
        UPDATE #TempTable SET reservedSize = REPLACE(reservedSize, ' KB', '')
        SELECT tableName 'Table Name',
               numberofRows 'Total Rows',
               reservedSize 'Reserved KB',
               dataSize 'Data Size',
               indexSize 'Index Size',
               unusedSize 'Unused Size'
        FROM #TempTable
        ORDER BY CONVERT(bigint, reservedSize) DESC
        DROP TABLE #TempTable
        GO

    Read the article

  • #OOW 2012 : IaaS, Private Cloud, Multitenant Database, and X3H2M2

    - by Eric Bezille
    The title of this post is a summary of the 4 announcements made by Larry Ellison today, during the opening session of Oracle Open World 2012... To know what's behind X3H2M2, you will have to wait a little, as I will go in order, beginning with the IaaS - Infrastructure as a Service - announcement.
    Oracle IaaS goes Public... and Private...
    Starting in 2004 with Fusion development, Oracle Cloud was launched last year to provide not only SaaS applications, based on standard development, but also the underlying PaaS required to build the specifics and the required interconnections between applications, in and outside of the Cloud. Still, to cover the end-to-end Cloud Services spectrum, we had to provide an Infrastructure as a Service, leveraging our servers, storage, OS, and virtualization technologies, all "Engineered Together". This Cloud Infrastructure was already available for our customers to rapidly build their own Private Cloud, either on SPARC/Solaris or x86/Linux... The second announcement made today brings that proposition a big step further: for cautious customers (like banks, or sensitive industries) who would like to benefit from the Cloud value of "as a Service" but don't want their data out in the Cloud, we propose to operate the same systems, Exadata, Exalogic & SuperCluster, that provide our Public Cloud Infrastructure, behind their firewall, in a Private Cloud model.
    Oracle 12c Multitenant Database
    This is also a major announcement made today on what's coming with Oracle Database 12c: the ability to consolidate multiple databases with no additional cost, especially in terms of memory needed on the server node, which is often THE consolidation limiting factor. The principle can be compared to Solaris Zones, where you have a Database Container that "owns" the memory and database background processes, and "Pluggable" Databases within this Database Container. This particular feature is a strong compelling event to evaluate Oracle Database 12c rapidly once it is available, as it is a major step forward into true database consolidation with multitenancy on a shared (optimized) infrastructure.
    X3H2M2, enabling the new Exadata X3 in-Memory Database
    Here we are: X3H2M2 stands for X3 (the new version of Exadata, also announced today) Heuristic Hierarchical Mass Memory, providing the capability to keep most if not all of the data in the memory cache hierarchy. Of course, this is the major software enhancement of the new X3 Exadata machine, but as it is software, our current customers will be able to benefit from it on their existing systems by upgrading to the new release. But that's not the only thing we did with X3; at the same time we upgraded everything: the CPUs, adding more cores per server node (16 vs. 12, with the arrival of Intel E5 / Sandy Bridge); the memory, with 512GB per node; and the new Flash Fire cards, now bringing up to 22 TB of flash cache. All of this - 4TB of RAM plus 22TB of flash - is used cleverly, not only for reads but also for writes, by the X3H2M2 algorithm... making a very big difference compared to a traditional storage flash extension. But what do those extra performances bring to you on an already very efficient system? Double the performance compared to the fastest storage array on the market today (including flash), while dividing your storage price by 10 at the same time... Something to consider closely these days...
    Especially since we also announced the availability of a new Exadata X3-2 eighth rack: a good starting point. As you have seen, a major opening for this year again, with true innovation. But that was not the only thing we saw today: before Larry's talk, Fujitsu introduced in more depth the upcoming new SPARC processor that they are co-developing with us. And Andrew Mendelsohn - Senior Vice President, Database Server Technologies - came on stage to explain that the next step after I/O optimization for the database with Exadata was to accelerate the database at the execution level by bringing functions into the SPARC processor silicon. All in all, to process more and more data... The big theme of the day... and of the Oracle User Group conferences that were also happening today, where I had the opportunity to attend some interesting sessions on practical use cases of Big Data, one in finance and fraud profiling and the other on practical deployment of Oracle Exalytics for data analytics. In conclusion, one picture to try to size Oracle Open World... and you can understand why, with such rich content... and this is only the first day!

    Read the article

  • How to handle new domain names?

    - by michael
    I have a new product which I'll call a pen ink reloader. I have a website using my product's name, for example www.inkywink.com, which I want to be found by searches for keywords such as "pen ink", "pen out of ink", "ink for pens", etc., since nobody knows that a pen ink reloader exists. I see that it's quite difficult to get on the front page for these keywords, since they have lots of competition. However, I notice that the exact phrases I want to rank highly for are available as domains. I purchase "www.penink.com" and "penoutofink.com", which for argument's sake are highly searched and the perfect keywords to get eyes on my money site www.inkywink.com. Two questions: 1. What is my best option to leverage those names so that they appear near the top of searches and I can get traffic to my money site? Do I just 301-redirect them to inkywink.com, or should I create small original content on each with links to my main site? 2. If I just have them redirected to inkywink.com, am I able to use keywords in meta tags and headers for each site separately, or do they all automatically get the same headers and tags as the site to which they're redirected? Thanks to anyone who can help, as I'm a real newbie to all this.

    Read the article

  • Am I an idealist?

    - by ereOn
    This is not only a question, it is also a call for help. Since I started my career as a programmer, I have always tried to learn from my mistakes. I worked hard to learn best practices, and while I don't consider myself a C++ expert, I still believe I'm not a beginner either. I was recently hired into a company for C++ development. There I was told that my way of working was "against the rules" and that I would have to change my mind. Here are the positions my management takes that I disagree with (their words):
    "You should not use separate header files for your different classes. One big header file is both easier to read and faster to compile."
    "Trying to use different headers is counter-productive: use the same super-set of headers everywhere, and enforce the use of #pragma hdrstop to hasten compilation."
    "You may not use Boost or any other library that uses nested directories to organize its files. Our build machine doesn't work with nested directories. Moreover, you don't need Boost to create great software."
    One might think I'm somehow exaggerating, but the sad truth is that I'm not. Those are their actual words. I believe that having separate files enhances maintainability and code correctness, and can speed up compilation through the use of proper includes. Have you been in a similar situation? What should I do? I feel like it's actually impossible for me to work that way, and day after day my frustration grows.

    Read the article

  • What's the proper term for a function inverse to a constructor - to unwrap a value from a data type?

    - by Petr Pudlák
    Edit: I'm rephrasing the question a bit. Apparently I caused some confusion because I didn't realize that the term destructor is used in OOP for something quite different - it's a function invoked when an object is being destroyed. In functional programming we (try to) avoid mutable state, so there is no equivalent to it. (I added the proper tag to the question.) Instead, I've seen that the record field for unwrapping a value (especially for single-valued data types such as newtypes) is sometimes called a destructor, or perhaps a deconstructor. For example, let's have (in Haskell): newtype Wrap = Wrap { unwrap :: Int } Here Wrap is the constructor, and unwrap is what? The questions are: What do we call unwrap in functional programming? Deconstructor? Destructor? Or some other term? And to clarify, is this (or other) terminology applicable to other functional languages, or is it used just in Haskell? Perhaps also, is there any terminology for this in general, in non-functional languages? I've seen both terms, for example: "... Most often, one supplies smart constructors and destructors for these to ease working with them. ..." on the Haskell wiki, or "... The general theme here is to fuse constructor - deconstructor pairs like ..." in the Haskell wikibook (here it's probably meant in a slightly more general sense), or "newtype DList a = DL { unDL :: [a] -> [a] } The unDL function is our deconstructor, which removes the DL constructor." in Real World Haskell.

    Read the article

  • How does using a LGPL gem affect my MIT licensed application?

    - by corsen
    I am developing an open source Ruby application under the MIT license. I am using this license because I don't want to place any restrictions on the users of the application; also, I can actually read and understand this license. I recently started using another Ruby gem in my project (require "somegem"). This gem is under the LGPL license. Do I have to change anything about my project because I am using this other gem, which is licensed under the LGPL? My project does not contain the source code of the other gem, and the gem is not shipped with my project. It is simply listed as a dependency, so that RubyGems will install it and my project can call into it from my code. Additionally, it would be helpful to know if there are any licenses I need to "watch out for" because using them would affect the license of my project. There are some other posts about this topic, but phrased in different ways; since I find this license stuff tricky, I am hoping to get an answer directed at my situation. Thank you, Corsen

    Read the article

  • Easter eggs as IP protection in software

    - by Simon
    I work in embedded software, and for some reason management wants to hide an Easter egg as a means of IP protection. They call it a watermark. Since our software interacts with the video preview feed (the image displayed on screen before you take a photo), they want me to implement a trigger which reacts to some unusual video input (a video Konami code, like dark - bright - dark - bright - whatever). When this trigger fires, something strange happens (something outside the normal behavior of the software). The goal is to be able to check whether our software is included in a given device. Does this sound like a good idea? I have many arguments against this move: What if the Konami code is too sensitive and a user triggers it by accident? Does this kind of watermark have any legal value? What if this "feature" is discovered by the client? The performance penalty has to be very small, since the software runs on small devices. I am the one developing this trigger; if things go wrong, what is my responsibility? What is your opinion about this method? I can't find a link, but I remember seeing an answer on this site suggesting that putting in Easter eggs for protection purposes was a good idea. Has anyone tried it with good results?
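    For illustration only, here is a minimal sketch (in Python, since the post names no language) of what such a luminance-sequence trigger could look like; the thresholds, pattern, and every name below are assumptions, not anything from the original post:

        # Hypothetical "video Konami code" trigger: watch the mean luminance of
        # successive preview frames for a dark-bright-dark-bright pattern.
        DARK, BRIGHT = 0.15, 0.85               # assumed normalized luminance cutoffs
        PATTERN = ["dark", "bright", "dark", "bright"]

        def classify(mean_luma):
            """Map a frame's mean luminance (0..1) to a coarse state."""
            if mean_luma < DARK:
                return "dark"
            if mean_luma > BRIGHT:
                return "bright"
            return "mid"

        def make_trigger(on_fire):
            """Return a per-frame callback that calls on_fire once PATTERN is seen."""
            progress = [0]  # index into PATTERN
            def on_frame(mean_luma):
                state = classify(mean_luma)
                if state == PATTERN[progress[0]]:
                    progress[0] += 1
                    if progress[0] == len(PATTERN):
                        progress[0] = 0
                        on_fire()  # the "something strange happens" hook
                elif state != "mid":  # ignore in-between frames
                    progress[0] = 1 if state == PATTERN[0] else 0
            return on_frame

    The sensitivity concern in the post maps directly onto the DARK/BRIGHT cutoffs: the tighter they are, the less likely a user triggers the sequence by accident.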

    Read the article


  • Collision planes confusion

    - by Jeffrey
    I'm following this tutorial by thecplusplusguy, and in the linked video he explains that, for example for the world's basement and walls, we need to create the actual rendered walls (the ones shown to the player), then duplicate them, place the duplicates at the same coordinates as the rendered walls, and name them collision (by setting their material to collision). He then defines in the object-loader function that objects whose material == collision are collision planes, which should not be rendered but only used to check collisions. Now I'm pretty confused. Why would we add this kind of complexity to a problem that can easily be solved by a simple loadObject(string plane_object, bool check_collision);:
    - Create only the walls object (by loading the .obj file in plane_object)
    - Define it also as a collision plane whenever check_collision is set to true
    In this case we have lowered the complexity of his method and made it more flexible and faster to develop with (faster because we don't always have to make a copy of each plane, and flexible because we don't hardcode the object loader). The only case in which this method would not work is when we need hidden collision planes, and for that we could modify the loadObject() function like this: loadObject(string plane_object, bool check_collision = true, bool hide_object = false);
    - Create only the walls object (by loading the .obj file in plane_object)
    - Define it also as a collision plane whenever check_collision is set to true
    - And add the ability to actually show or hide the object based on hide_object
    The final question is: am I right? What possible problems would I encounter with my solution versus his?
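    As a concrete (and purely hypothetical) illustration of the signature proposed above, translated into a Python sketch - the tutorial itself is C++, and every name here is an assumption:

        # Sketch of the proposed single-call loader: one mesh can be rendered,
        # used for collision, or both, instead of duplicating the geometry.
        from dataclasses import dataclass, field

        @dataclass
        class GameObject:
            name: str
            vertices: list = field(default_factory=list)  # parsed from the .obj
            check_collision: bool = False                 # used by collision tests
            visible: bool = True                          # drawn by the renderer

        def load_object(plane_object: str, check_collision: bool = True,
                        hide_object: bool = False) -> GameObject:
            """Load one .obj file and tag it for rendering and/or collision."""
            obj = GameObject(name=plane_object,
                             check_collision=check_collision,
                             visible=not hide_object)
            # ... parse the .obj file here and fill obj.vertices ...
            return obj

        # A visible wall that also collides, and an invisible collision-only plane:
        wall = load_object("wall.obj")
        barrier = load_object("barrier.obj", check_collision=True, hide_object=True)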

    Read the article

  • It could be worse....

    - by Darryl Gove
    As "guest" pointed out, in my file I/O test I didn't open the file with O_SYNC, so in fact the time was spent in OS code rather than in disk I/O. It's a straightforward change to add O_SYNC to the open() call, but it's also useful to reduce the iteration count, since the cost per write is much higher:

        ...
        #define SIZE 1024

        void test_write()
        {
          starttime();
          int file = open("./test.dat", O_WRONLY|O_CREAT|O_SYNC, S_IWGRP|S_IWOTH|S_IWUSR);
        ...

    Running this gave the following results:

        Time per iteration 0.000065606310 MB/s
        Time per iteration 2.709711563906 MB/s
        Time per iteration 0.178590114758 MB/s

    Yup, disk I/O is way slower than the original I/O calls. However, it's not a very fair comparison, since disks get written in large blocks of data and we're deliberately sending a single byte. A fairer result would be to look at the I/O operations per second, which is about 65 - pretty much what I'd expect for this system. It's also interesting to examine the profiles for the two cases. When the write() was trapping into the OS, the profile indicated that all the time was being spent in system. When the data was being written to disk, the time got attributed to sleep. This gives us an indication of how to interpret profiles from apps doing I/O: it's the sleep time that indicates disk activity.
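    For readers who want to reproduce the effect without the C harness, here is a rough equivalent of the experiment in Python (a sketch, assuming a POSIX system where os.O_SYNC is available; the file name and iteration count are arbitrary):

        # Time synchronous single-byte writes, mirroring the O_SYNC change above.
        import os, time

        ITERATIONS = 100  # kept small: each O_SYNC write waits for the disk

        fd = os.open("./test.dat", os.O_WRONLY | os.O_CREAT | os.O_SYNC, 0o644)
        start = time.time()
        for _ in range(ITERATIONS):
            os.write(fd, b"x")  # one byte per call, forced out to stable storage
        elapsed = time.time() - start
        os.close(fd)

        print("Time per iteration:", elapsed / ITERATIONS, "s")
        print("I/O operations per second:", ITERATIONS / elapsed)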

    Read the article

  • Validate that a Checkbox is checked using javascript

    - by H(at)Ni
    I was facing a challenge yesterday: I was creating a Visual Web Part and I wanted to validate that a submit button only takes effect if the user has checked an "I agree to terms" checkbox. Something weird was that I tested my code on a normal ASP.NET website and it worked perfectly, while it behaved differently inside the web part: whenever I checked the checkbox, the button was enabled, but clicking it would not fire the ASP.NET validators on the client side; it posted back the page, and the validators appeared only after that. So I changed my way of thinking and reached a different solution: call a JavaScript function whenever the button is clicked, and check there whether the checkbox is checked. To illustrate, here is an example of what I mean:
    1. The button in the aspx page (note the return, so that a false result blocks the postback):

        <asp:Button OnClientClick="return CheckForCondition();" ValidationGroup="CompaniesSection"
            ID="btnCompaniesSubmit" runat="server" Text="Submit" />

    2. The CheckForCondition() function:

        <script language="javascript" type="text/javascript">
            function CheckForCondition() {
                if ($jq('#<%= ChkCompanyCheck.ClientID %>:checked').val() == undefined) {
                    $jq('#lblCheckBox').show();
                    return false;
                }
                else {
                    $jq('#lblCheckBox').hide();
                    return true;
                }
            }
        </script>

    3. lblCheckBox is simply a label that shows a red asterisk beside the checkbox to indicate that it's a required field:

        <label id="lblCheckBox" style="color:Red;display:none">*</label>

    Read the article

  • Efficiently rendering to 3D texture

    - by TravisG
    I have an existing depth texture and some other color textures, and I want to process the information in them by rendering into a 3D texture (based on the depth contained in the depth texture, i.e. a point at (x/y) in the depth texture will be rendered to (x/y/texture(depth,uv)) in the 3D texture). Simply doing one manual draw call for each slice of the 3D texture (via glFramebufferTextureLayer) is terribly slow, since I don't know beforehand to which slice of the 3D texture a given texel from one of the color textures or the depth texture belongs. This means the entire process is effectively:

        for each slice
            for each texel in depth texture
                process color textures and render to slice

    So I have to sample the depth texture completely for each slice, and I also have to go through the processing (at least up to the discard;) for all texels in it. It would be much faster if I could rearrange the process to:

        for each texel in depth texture
            figure out what slice it should end up in
            process color textures and render to slice

    Is this possible? If so, how? What I'm actually trying to do: the color textures contain lighting information (as seen from the light's view; it's a reflective shadow map). I want to accumulate that information in the 3D texture and then later use it to light the scene. More specifically, I'm trying to implement Crytek's Light Propagation Volumes algorithm.
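    To make the desired rearrangement concrete, here is a CPU-side sketch in Python/NumPy (purely illustrative, with toy sizes; on the GPU the same scatter is typically achieved with layered rendering, e.g. a geometry shader selecting the target layer, or with image load/store):

        # Each depth-map texel is binned into the 3D-texture slice its depth
        # maps to: one pass over the depth texture instead of one per slice.
        import numpy as np

        H, W, SLICES = 4, 4, 8                  # toy sizes
        depth = np.random.rand(H, W)            # stand-in for the depth texture
        color = np.random.rand(H, W, 3)         # stand-in for a color texture
        volume = np.zeros((SLICES, H, W, 3))    # the 3D texture

        slice_idx = np.minimum((depth * SLICES).astype(int), SLICES - 1)
        for y in range(H):
            for x in range(W):
                # scatter: each texel is visited exactly once
                volume[slice_idx[y, x], y, x] = color[y, x]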

    Read the article

  • Interpolation between two 3D points?

    - by meds
    I'm working with some splines which define a path a character follows (you can see a gameplay video here to get a better understanding of what's going on: http://www.youtube.com/watch?v=BndobjOiZ6g). Basically the character's 'forward' look direction is set to the 'forward' direction of the spline, and when players tilt their phone left and right the character is strafed along its 'right' coordinate. The issue with this is (rather obviously) performance: interpolating over a spline to find the nearest position and tangent relative to the player is an incredibly costly operation. To get by this I cache a finite number of positions in what I call 'SplineDetails'; the class is as follows:

        public class SplineDetails
        {
            public SplineDetails()
            {
                Forward = Vector3.forward;
                Position = Vector3.one * float.MaxValue;
                Alpha = -1;
            }

            public float Alpha; // [0,1] measured along the length of the spline, where 0 is the initial point and 1 is the end point of the spline
            public Vector3 Position; // the point of the spline at this alpha
            public Vector3 Forward; // the forward tangent of the spline at this alpha
        }

    I populate this with, say, 30 coordinates, and I can give a rough estimate of a position and 'forward' based on a position passed in. It's not as accurate, but it's much faster. But now I'd like to make the system work better by estimating positions and 'forward' directions by interpolating between two of the cached points, though I'm stuck trying to figure out the logic. My first problem is: how can I determine which two points the object is between? Given that each point can be placed at different intervals along the spline, two points in front of or behind the object may be closer to the object. The other problem is figuring out the proportion between the two points it's between, i.e. if there is point a at coordinate (0,0,0) and point b at coordinate (1,0,0), and the object is at position (0.5,0,0), then the result should be 0.5 (as it is an equal distance away from point a and point b). That's a simple example, but what if the object is at coordinate (0.5,3,0), for example?
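    For the proportion question, one standard approach (not from the original post; a Python sketch of projecting the object's position onto the segment between two cached points, which also handles off-axis positions like (0.5,3,0)):

        # Project point p onto segment a->b; t is the proportion along the
        # segment, clamped to [0,1]. For p=(0.5,3,0) between (0,0,0) and
        # (1,0,0) this yields 0.5, matching the example in the question.
        def segment_t(a, b, p):
            ab = tuple(bi - ai for ai, bi in zip(a, b))
            ap = tuple(pi - ai for ai, pi in zip(a, p))
            denom = sum(c * c for c in ab)      # squared length of the segment
            if denom == 0.0:
                return 0.0                      # degenerate segment
            t = sum(x * y for x, y in zip(ap, ab)) / denom
            return max(0.0, min(1.0, t))

        print(segment_t((0, 0, 0), (1, 0, 0), (0.5, 3, 0)))  # 0.5

    To pick which pair of cached points to use, one option is to test the segments around the last known Alpha and keep the one with the smallest point-to-segment distance.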

    Read the article

  • Port Forwarding on Actiontec GT704-WG Router Issues

    - by adamweeks
    I am trying to set up a server at a customer's location that has the Actiontec GT704-WG DSL router. The port forwarding is not working at all. Here are the details:
    Server:
    - OpenSuse Linux box with a static IP address of 192.168.1.200
    - Application running, accepting connections on port 8060
    - Firewall disabled
    - Local connections (within the network) working properly
    Router:
    - Updated to the latest firmware available
    - DHCP range set to 192.168.1.69-192.168.1.199 to avoid any conflicts with the server
    - Firewall set to "off"
    - Rule set in the "Applications" settings to forward 8060 TCP and UDP to the 192.168.1.200 machine (I've tried the "TCP,UDP" option as well as both individual options)
    I've also tried simply putting the server in the DMZ to see if I could connect to anything, but still nothing. Looking for any clues before I call and waste hours explaining the issue to tech support.

    Read the article

  • Connecting to a LDAPS server

    - by Pavanred
    I am working on a development machine and I am trying to connect to my LDAP server. This is what I do:

        telnet ldaps- 686

    The response is:

        Could not open connection to the host on port 686 : connect failed

    But the strange part is that when I connect to my server with

        telnet ldap- 389

    the connection is successful. My question is: why does this happen? Do I have to install an SSL certificate on the client machine that I make the call from? I do not know much about this. I know for a fact that the LDAP server is working fine, because other applications are successfully using it currently.
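    As an aside, a quick way to test a TLS-wrapped port from a client is a short Python check (a sketch; the hostname below is a placeholder, and note that the standard LDAPS port is 636). A plain TLS handshake needs no client certificate unless the server demands one, but the client does need to trust the server's CA:

        # Minimal TLS connectivity check against an LDAPS port.
        import socket, ssl

        ctx = ssl.create_default_context()   # uses the system CA store
        with socket.create_connection(("ldap.example.com", 636), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname="ldap.example.com") as tls:
                print("TLS established:", tls.version())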

    Read the article

  • How to Script a backup for each database on an MSSQL Engine?

    - by Geo
    We need to back up 40 databases inside an MS SQL Server engine. We back up each database with the following script:

        BACKUP DATABASE [dbname1] TO DISK = N'J:\SQLBACKUPS\dbname1.bak' WITH NOFORMAT, INIT,
            NAME = N'dbname1-Full Database Backup', SKIP, NOREWIND, NOUNLOAD, STATS = 10
        GO
        declare @backupSetId as int
        select @backupSetId = position from msdb..backupset
            where database_name=N'dbname1'
            and backup_set_id=(select max(backup_set_id) from msdb..backupset
                               where database_name=N'dbname1')
        if @backupSetId is null
        begin
            raiserror(N'Verify failed. Backup information for database ''dbname1'' not found.', 16, 1)
        end
        RESTORE VERIFYONLY FROM DISK = N'J:\SQLBACKUPS\dbname1.bak'
            WITH FILE = @backupSetId, NOUNLOAD, NOREWIND
        GO

    We would like the script to take each database and substitute it into the script above: basically, a script that will create and verify a backup of each database on an engine. I am looking for something like this:

        For each database in database-list
            sp_backup(database) // this is the call to the script above
        End For

    Any ideas?
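    If driving the loop from outside SQL Server is acceptable, here is a sketch in Python (not the T-SQL cursor asked for above; it assumes pyodbc with the "ODBC Driver 17 for SQL Server", Windows authentication, and the same backup path - all of these are assumptions):

        # Iterate over the user databases and take a full backup of each one.
        import pyodbc

        conn = pyodbc.connect(
            "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
            "Trusted_Connection=yes",
            autocommit=True,  # BACKUP DATABASE cannot run inside a transaction
        )
        names = [row.name for row in conn.execute(
            "SELECT name FROM sys.databases WHERE database_id > 4")]  # skip system DBs

        for name in names:
            path = "J:\\SQLBACKUPS\\" + name + ".bak"
            conn.execute(
                "BACKUP DATABASE [" + name + "] TO DISK = N'" + path + "' "
                "WITH NOFORMAT, INIT, SKIP, NOREWIND, NOUNLOAD, STATS = 10")
            print("backed up", name)

    The RESTORE VERIFYONLY step from the script above could be appended inside the same loop.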

    Read the article

  • Passing Parameters to an ADF Page through the URL - Part 2.

    - by shay.shmeltzer
    I showed before how to pass a parameter on the URL when invoking a taskflow (where the taskflow starts with a method call and then a page). However, in some simpler scenarios you don't actually need a full-blown taskflow. Instead, you can use page-level parameters defined for your page in the adfc-config.xml file. So below is a demo of this technique. I'm also taking advantage of this video to show the concept of a view-object-level service method and how to invoke it from your page. P.S. You might wonder - why not just reference #{param.amount} as the value set for the method parameter? Why do I need to copy it into a viewScope parameter? The advantage of placing the value in the viewScope is that it is available even after the page has gone through several submits. For example, if you switch the "partialSubmit" property of the "Next" button to false in the above example, the minute you press the button to go to the next department, the param.amount value is gone. However, the viewScope value is still there as long as you stay on this page.

    Read the article

  • Restore Failure from Ubuntu One

    - by Qawi Robinson
    Had to do a reinstall of Ubuntu 12 after 13.10 failed. Lost all my data, but I remembered that I had data backed up to Ubuntu One. It recognized my previous backups, but I got errors and a restore failure when I went to restore the data. This is what I got. Can anyone make heads or tails of this? I still don't have my data. Thanks.

        Traceback (most recent call last):
          File "/usr/bin/duplicity", line 1412, in <module>
            with_tempdir(main)
          File "/usr/bin/duplicity", line 1405, in with_tempdir
            fn()
          File "/usr/bin/duplicity", line 1339, in main
            restore(col_stats)
          File "/usr/bin/duplicity", line 630, in restore
            restore_get_patched_rop_iter(col_stats)):
          File "/usr/lib/python2.7/dist-packages/duplicity/patchdir.py", line 522, in Write_ROPaths
            for ropath in rop_iter:
          File "/usr/lib/python2.7/dist-packages/duplicity/patchdir.py", line 495, in integrate_patch_iters
            final_ropath = patch_seq2ropath( normalize_ps( patch_seq ) )
          File "/usr/lib/python2.7/dist-packages/duplicity/patchdir.py", line 475, in patch_seq2ropath
            misc.copyfileobj( current_file, tempfp )
          File "/usr/lib/python2.7/dist-packages/duplicity/misc.py", line 166, in copyfileobj
            buf = infp.read(blocksize)
          File "/usr/lib/python2.7/dist-packages/duplicity/librsync.py", line 80, in read
            self._add_to_outbuf_once()
          File "/usr/lib/python2.7/dist-packages/duplicity/librsync.py", line 94, in _add_to_outbuf_once
            raise librsyncError(str(e))
        librsyncError: librsync error 103 while in patch cycle

    Read the article

  • Public JCP EC Meeting - 26 June

    - by heathervc
    The first 2012 public JCP Executive Committee (EC) teleconference meeting is scheduled for next Tuesday, 26 June, at 8:00 AM Pacific Time (PDT). This meeting is open to the participation of all (members and non-members). JCP 2.8 (JSR 348) set the requirement for the JCP to hold two public teleconferences each year for the developer community to meet with the JCP EC. There will also be a public EC face-to-face meeting during the 2012 JavaOne Conference; details to follow soon. The meeting details for Tuesday morning are below. Please participate!
    Meeting details
    Date & Time: Tuesday June 26, 2012, 8:00 - 9:00 am PDT
    Location: Teleconference
    Dial-in: +1 (866) 682-4770, conference code 627-9803, security code 52732 ("JCPEC" on your phone handset); for global access numbers see http://www.intercall.com/oracle/access_numbers.htm; or +1 (408) 774-4073
    WebEx: Browse for the meeting from https://jcp.webex.com; no registration required (enter your name and email address); password: 52732
    Agenda
    - JCP.next status: overview of JSRs 355 and 358
    - JCP events at JavaOne
    - Annual awards
    - Improving communications between the EC and the community
    - Q&A
    Note: The call will be recorded and the recording published on jcp.org, so those who are unable to join in real time will still be able to participate.

    Read the article

  • Linux IPv6: DHCP and /127 prefixes

    - by Jeff Ferland
    I've tried multiple DHCP clients and pieces of software while attempting to set up a solution for allocating a /127 prefix to virtual machines, so that each maintains its own layer 2 isolation. Because there would only be one host assigned to each network, a /64 is impractical. While the prefix size could reasonably be anywhere in the /64-/127 range, the crux of the problem has been the same regardless of the software used: when the DHCP client brings up the interface, it uses the address advertised by DHCPv6 and inserts two routes: the /127 given by the router advertisement packets, and a /64 as well. Any thoughts on why I'm getting the additional route added across DHCP client vendors?

    Read the article

  • Network email alerts

    - by stephenfalken
    I would like to buy or download an application that will alert our sysadmin in the event of a network failure, e.g. via email and/or phone call. I've looked around the net before posting this question, but I have one main concern: if our network goes down, how can a network monitoring application send email to alert us that it's down? Doesn't this have to be an external application that looks at our network from outside? In any case, I want to find something that will handle this for us - freeware if possible, since we're not looking for heavy analysis, just simple alerts. Advice is appreciated.
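    To make the "watch it from outside" idea concrete, here is a minimal sketch in Python that would run on a host outside the monitored network (every address and hostname below is a placeholder): it pings a public-facing address and emails the alert through a mail server that does not depend on the monitored network:

        # Minimal external watchdog: run this OUTSIDE the monitored network so
        # the alert path does not depend on the network being watched.
        import smtplib, subprocess, time
        from email.message import EmailMessage

        TARGET = "203.0.113.10"          # placeholder: the network's public address
        SMTP_HOST = "smtp.example.com"   # placeholder: an independent mail server

        def is_up(host):
            # One ICMP echo with a short timeout (Linux ping syntax).
            return subprocess.run(["ping", "-c", "1", "-W", "2", host],
                                  capture_output=True).returncode == 0

        def alert():
            msg = EmailMessage()
            msg["Subject"] = "Network down: " + TARGET + " unreachable"
            msg["From"] = "watchdog@example.com"
            msg["To"] = "sysadmin@example.com"
            msg.set_content("Ping probe failed at " + time.ctime())
            with smtplib.SMTP(SMTP_HOST) as smtp:
                smtp.send_message(msg)

        while True:
            if not is_up(TARGET):
                alert()
            time.sleep(60)  # probe once a minute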

    Read the article

  • The Importance of Fully Specifying a Problem

    - by Alan
    I had a customer call this week where we were provided with a forced crashdump and asked to determine why the system was hung. Normally when you are looking at a hung system, you will find a lot of threads blocked on various locks and most likely very little actually running on the system (unless it's threads spinning on busy-wait type locks). This vmcore showed none of that; in fact, we were seeing hundreds of threads actively on CPU in the second before the dump was forced. This prompted the question back to the customer: what exactly were you seeing that made you believe the system was hung? It took a few days to get a response, but the response I got back was that they were not able to ssh into the system, and when they tried to log in on the console, they got the login prompt, but after typing "root" and hitting return, the console was no longer responsive. This description puts a whole new light on the "hang". You immediately start thinking "name services". Looking at the crashdump: yes, the sshds are all in door calls to nscd, and nscd is idle, waiting on responses from the network. Looking at the connections, I see a lot of connections to the secure LDAP port in CLOSE_WAIT, but more interestingly, I am seeing a few connections over the non-secure LDAP port to a different LDAP server just sitting open. My feeling at this point is that we have an LDAP server that is either not responding or responding slowly, the resolution being to investigate that server.
    Moral: when you log a service ticket for a "system hang", it's great to get the forced crashdump first up, but it's even better to get a description of what you observed that made you believe the system was hung.

    Read the article
