Search Results

Search found 12043 results on 482 pages for 'dynamically generated'.


  • How do I use jQueryUI's ui-states with JSF2's h:messages?

    - by Andrew
    It seems like it should be very simple to specify that the h:messages generated by JSF should be styled using jQueryUI's nice ui-states. But sadly I can't make it fit. It seems that the jQueryUI states require several elements (div,div,p,span) in order to make it work. So taking inspiration directly from the jQueryUI theme demo page: <!-- Highlight / Error --> <h2 class="demoHeaders">Highlight / Error</h2> <div class="ui-widget"> <div class="ui-state-highlight ui-corner-all" style="margin-top: 20px; padding: 0 .7em;"> <p><span class="ui-icon ui-icon-info" style="float: left; margin-right: .3em;"></span> <strong>Hey!</strong> Sample ui-state-highlight style.</p> </div> </div> <br/> <div class="ui-widget"> <div class="ui-state-error ui-corner-all" style="padding: 0 .7em;"> <p><span class="ui-icon ui-icon-alert" style="float: left; margin-right: .3em;"></span> <strong>Alert:</strong> Sample ui-state-error style.</p> </div> </div> and trying to jam the css class details into my h:message as best I can: <div class="ui-widget"> <h:messages globalOnly="true" errorClass="ui-state-error ui-corner-all ui-icon-alert" infoClass="ui-state-highlight ui-corner-all ui-icon-info"/> </div> I don't get the icon or sufficient padding etc but the colours make it through. So, the styles are being applied but they aren't working as intended. Any idea how I can make this work?
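    One possible workaround, sketched below on the assumption that you are on JSF 2 and can live with separate blocks for error and info messages: keep the jQuery UI wrapper markup from the theme demo as literal HTML in the page and drop a plain h:messages inside it, so the icon span never has to be generated by JSF. The rendered test via facesContext.messageList is an assumption about your setup, not something from the original post.

        <h:panelGroup layout="block" styleClass="ui-widget"
                      rendered="#{not empty facesContext.messageList}">
          <div class="ui-state-error ui-corner-all" style="padding: 0 .7em;">
            <p>
              <!-- the icon span is plain markup here, exactly as in the theme demo -->
              <span class="ui-icon ui-icon-alert"
                    style="float: left; margin-right: .3em;"></span>
              <h:messages globalOnly="true" layout="list" />
            </p>
          </div>
        </h:panelGroup>

    A matching block with ui-state-highlight and ui-icon-info would cover the info case; the obvious trade-off is that one wrapper cannot style errors and infos differently at the same time.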

    Read the article

  • VFS: file-max limit 1231582 reached

    - by Rick Koshi
    I'm running a Linux 2.6.36 kernel, and I'm seeing some random errors. Things like ls: error while loading shared libraries: libpthread.so.0: cannot open shared object file: Error 23 Yes, my system can't consistently run an 'ls' command. :( I note several errors in my dmesg output: # dmesg | tail [2808967.543203] EXT4-fs (sda3): re-mounted. Opts: (null) [2837776.220605] xv[14450] general protection ip:7f20c20c6ac6 sp:7fff3641b368 error:0 in libpng14.so.14.4.0[7f20c20a9000+29000] [4931344.685302] EXT4-fs (md16): re-mounted. Opts: (null) [4982666.631444] VFS: file-max limit 1231582 reached [4982666.764240] VFS: file-max limit 1231582 reached [4982767.360574] VFS: file-max limit 1231582 reached [4982901.904628] VFS: file-max limit 1231582 reached [4982964.930556] VFS: file-max limit 1231582 reached [4982966.352170] VFS: file-max limit 1231582 reached [4982966.649195] top[31095]: segfault at 14 ip 00007fd6ace42700 sp 00007fff20746530 error 6 in libproc-3.2.8.so[7fd6ace3b000+e000] Obviously, the file-max errors look suspicious, being clustered together and recent. # cat /proc/sys/fs/file-max 1231582 # cat /proc/sys/fs/file-nr 1231712 0 1231582 That also looks a bit odd to me, but the thing is, there's no way I have 1.2 million files open on this system. I'm the only one using it, and it's not visible to anyone outside the local network. # lsof | wc 16046 148253 1882901 # ps -ef | wc 574 6104 44260 I saw some documentation saying: file-max & file-nr: The kernel allocates file handles dynamically, but as yet it doesn't free them again. The value in file-max denotes the maximum number of file- handles that the Linux kernel will allocate. When you get lots of error messages about running out of file handles, you might want to increase this limit. Historically, the three values in file-nr denoted the number of allocated file handles, the number of allocated but unused file handles, and the maximum number of file handles. Linux 2.6 always reports 0 as the number of free file handles -- this is not an error, it just means that the number of allocated file handles exactly matches the number of used file handles. Attempts to allocate more file descriptors than file-max are reported with printk, look for "VFS: file-max limit reached". My first reading of this is that the kernel basically has a built-in file descriptor leak, but I find that very hard to believe. It would imply that any system in active use needs to be rebooted every so often to free up the file descriptors. As I said, I can't believe this would be true, since it's normal to me to have Linux systems stay up for months (even years) at a time. On the other hand, I also can't believe that my nearly-idle system is holding over a million files open. Does anyone have any ideas, either for fixes or further diagnosis? I could, of course, just reboot the system, but I don't want this to be a recurring problem every few weeks. As a stopgap measure, I've quit Firefox, which was accounting for almost 2000 lines of lsof output (!) even though I only had one window open, and now I can run 'ls' again, but I doubt that will fix the problem for long. (edit: Oops, spoke too soon. By the time I finished typing out this question, the symptom was/is back) Thanks in advance for any help. And another update: My system was basically unusable, so I decided I had no option but to reboot. But before I did, I carefully quit one process at a time, checking /proc/sys/fs/file-nr after each termination. 
I found that, predictably, the number of open files gradually went down as I closed things down. Unfortunately, it wasn't a large effect. Yes, I was able to clear up 5000-10000 open files, but there were still over 1.2 million left. I shut down just about everything. All interactive shells, except for the one ssh I left open to finish closing down, httpd, even nfs service. Basically everything in the process table that wasn't a kernel process, and there were still an appalling number of files apparently left open. After the reboot, I found that /proc/sys/fs/file-nr showed about 2000 files open, which is much more reasonable. Starting up 2 Xvnc sessions as usual, along with the dozen or so monitoring windows I like to keep open, brought the total up to about 4000 files. I can see nothing wrong with that, of course, but I've obviously failed to identify the root cause. I'm still looking for ideas, since I definitely expect it to happen again. And another update, the next day: I watched the system carefully, and discovered that /proc/sys/fs/file-nr showed a growth of about 900 open files per hour. I shut down the system's only NFS client for the night, and the growth stopped. Mind you, it didn't free up the resources, but it did at least stop consuming more. Is this a known bug with NFS? I'll be bringing the NFS client back online today, and I'll narrow it down further. If anyone is familiar with this behavior, feel free to jump in with "Yeah, NFS4 has this problem, go back to NFS3" or something like that.
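    If it happens again, a crude sampling loop like the sketch below (nothing kernel-specific, just /proc polling) makes it easy to line the growth rate up against NFS client activity, nfsd restarts, or anything else you toggle:

        #!/bin/sh
        # Log the file-nr counters once a minute so the ~900/hour growth can be
        # correlated with whatever is running at the time.
        while true; do
            echo "$(date '+%F %T')  $(cat /proc/sys/fs/file-nr)" >> /var/log/file-nr.log
            sleep 60
        done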

    Read the article

  • Including hibernate jar dependencies in ant build

    - by Patrick
    Hi, I'm trying to compile a runnable jar-file for a project that makes use of hibernate. I'm trying to construct an ant build.xml file to streamline my build process, but I'm having troubles with the inclusion of the hibernate3.jar inside the final jar-file. If I run the ant script I manage to include all my library jars, and they are put in the final jar-file's root. When I run the jar-file I get a java.lang.NoClassDefFoundError: org/hibernate/Session error. If I make use of the built-in export to jar in Eclipse, it works only if I choose "extract required libraries into jar". But that bloats the jar, and includes too much of my project (i.e. unit tests). Below is my generated manifest: Manifest-Version: 1.0 Main-Class: main.ServerImpl Class-Path: ./ antlr-2.7.6.jar commons-collections-3.1.jar dom4j-1.6.1.jar hibernate3.jar javassist-3.9.0.GA.jar jta-1.1.jar slf4j-api-1.5.11.jar slf4j-simple-1.5.11.jar mysql-connector-java-5.1.12-bin.jar rmiio-2.0.2.jar commons-logging-1.1.1.jar And the part of the build.xml looks like this: <target name="dist" depends="compile" description="Generates the Distribution Jar(s)"> <mkdir dir="${dist.dir}" /> <jar destfile="${dist.dir}/${dist.file.name}.jar" basedir="${build.prod.dir}" filesetmanifest="mergewithoutmain"> <manifest> <attribute name="Main-Class" value="${main.class}" /> <attribute name="Class-Path" value="./ ${manifest.classpath} " /> <attribute name="Implementation-Title" value="${app.name}" /> <attribute name="Implementation-Version" value="${app.version}" /> <attribute name="Implementation-Vendor" value="${app.vendor}" /> </manifest> <zipfileset refid="hibernatefiles" /> <zipfileset refid="slf4jfiles" /> <zipfileset refid="mysqlfiles" /> <zipfileset refid="commonsloggingfiles" /> <zipfileset refid="rmiiofiles" /> </jar> </target> The refids' for the zipfilesets point to the directories in a library directory lib in the root of the project. The manifest.classpath-variable takes the classpath of all those library jar-files, and flattens them with pathconvert and mapper. I've also tried to set the manifest classpath to ".", "./" and only the library jar, but to no difference at all. I'm hoping there's a simple remedy to my problems...
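    A likely explanation: the Class-Path manifest attribute can only point at jars sitting on disk next to the generated jar. Jars copied into the jar itself (which is what the zipfileset references do here) are invisible to the standard class loader, so hibernate3.jar is present but never loaded, hence the NoClassDefFoundError for org/hibernate/Session. Below is a sketch of a dist target that keeps the libraries beside the jar instead; lib.dir and the computed property name are assumptions, not names from the original build file:

        <target name="dist" depends="compile" description="Generates the Distribution Jar(s)">
            <mkdir dir="${dist.dir}/lib" />
            <copy todir="${dist.dir}/lib" flatten="true">
                <fileset dir="${lib.dir}" includes="**/*.jar" />
            </copy>
            <manifestclasspath property="manifest.classpath.computed"
                               jarfile="${dist.dir}/${dist.file.name}.jar">
                <classpath>
                    <fileset dir="${dist.dir}/lib" includes="*.jar" />
                </classpath>
            </manifestclasspath>
            <jar destfile="${dist.dir}/${dist.file.name}.jar" basedir="${build.prod.dir}">
                <manifest>
                    <attribute name="Main-Class" value="${main.class}" />
                    <attribute name="Class-Path" value="${manifest.classpath.computed}" />
                </manifest>
            </jar>
        </target>

    If a single self-contained jar really is the goal, replacing the copy/manifestclasspath pair with <zipgroupfileset dir="${lib.dir}" includes="*.jar"/> inside the <jar> task merges the class files of the libraries into the jar rather than nesting whole jar files, which the class loader can actually use.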

    Read the article

  • LINQ aggregate left join on SQL CE

    - by P Daddy
    What I need is such a simple, easy query, it blows me away how much work I've done just trying to do it in LINQ. In T-SQL, it would be: SELECT I.InvoiceID, I.CustomerID, I.Amount AS AmountInvoiced, I.Date AS InvoiceDate, ISNULL(SUM(P.Amount), 0) AS AmountPaid, I.Amount - ISNULL(SUM(P.Amount), 0) AS AmountDue FROM Invoices I LEFT JOIN Payments P ON I.InvoiceID = P.InvoiceID WHERE I.Date between @start and @end GROUP BY I.InvoiceID, I.CustomerID, I.Amount, I.Date ORDER BY AmountDue DESC The best equivalent LINQ expression I've come up with, took me much longer to do: var invoices = ( from I in Invoices where I.Date >= start && I.Date <= end join P in Payments on I.InvoiceID equals P.InvoiceID into payments select new{ I.InvoiceID, I.CustomerID, AmountInvoiced = I.Amount, InvoiceDate = I.Date, AmountPaid = ((decimal?)payments.Select(P=>P.Amount).Sum()).GetValueOrDefault(), AmountDue = I.Amount - ((decimal?)payments.Select(P=>P.Amount).Sum()).GetValueOrDefault() } ).OrderByDescending(row=>row.AmountDue); This gets an equivalent result set when run against SQL Server. Using a SQL CE database, however, changes things. The T-SQL stays almost the same. I only have to change ISNULL to COALESCE. Using the same LINQ expression, however, results in an error: There was an error parsing the query. [ Token line number = 4, Token line offset = 9,Token in error = SELECT ] So we look at the generated SQL code: SELECT [t3].[InvoiceID], [t3].[CustomerID], [t3].[Amount] AS [AmountInvoiced], [t3].[Date] AS [InvoiceDate], [t3].[value] AS [AmountPaid], [t3].[value2] AS [AmountDue] FROM ( SELECT [t0].[InvoiceID], [t0].[CustomerID], [t0].[Amount], [t0].[Date], COALESCE(( SELECT SUM([t1].[Amount]) FROM [Payments] AS [t1] WHERE [t0].[InvoiceID] = [t1].[InvoiceID] ),0) AS [value], [t0].[Amount] - (COALESCE(( SELECT SUM([t2].[Amount]) FROM [Payments] AS [t2] WHERE [t0].[InvoiceID] = [t2].[InvoiceID] ),0)) AS [value2] FROM [Invoices] AS [t0] ) AS [t3] WHERE ([t3].[Date] >= @p0) AND ([t3].[Date] <= @p1) ORDER BY [t3].[value2] DESC Ugh! Okay, so it's ugly and inefficient when run against SQL Server, but we're not supposed to care, since it's supposed to be quicker to write, and the performance difference shouldn't be that large. But it just doesn't work against SQL CE, which apparently doesn't support subqueries within the SELECT list. In fact, I've tried several different left join queries in LINQ, and they all seem to have the same problem. Even: from I in Invoices join P in Payments on I.InvoiceID equals P.InvoiceID into payments select new{I, payments} generates: SELECT [t0].[InvoiceID], [t0].[CustomerID], [t0].[Amount], [t0].[Date], [t1].[InvoiceID] AS [InvoiceID2], [t1].[Amount] AS [Amount2], [t1].[Date] AS [Date2], ( SELECT COUNT(*) FROM [Payments] AS [t2] WHERE [t0].[InvoiceID] = [t2].[InvoiceID] ) AS [value] FROM [Invoices] AS [t0] LEFT OUTER JOIN [Payments] AS [t1] ON [t0].[InvoiceID] = [t1].[InvoiceID] ORDER BY [t0].[InvoiceID] which also results in the error: There was an error parsing the query. [ Token line number = 2, Token line offset = 5,Token in error = SELECT ] So how can I do a simple left join on a SQL CE database using LINQ? Am I wasting my time?
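    One pragmatic workaround, assuming the date-filtered result set is of manageable size: let SQL CE run only two simple fetches and do the left join and aggregation in memory with LINQ to Objects, so no correlated subquery ever reaches the CE parser. A sketch along those lines (note it totals payments for all invoices, which may or may not be acceptable for your data volumes):

        // Server side: two plain queries SQL CE can handle.
        var invoiceRows = Invoices
            .Where(i => i.Date >= start && i.Date <= end)
            .ToList();

        var paymentTotals = Payments
            .GroupBy(p => p.InvoiceID)
            .Select(g => new { InvoiceID = g.Key, Paid = g.Sum(p => p.Amount) })
            .ToList()
            .ToDictionary(x => x.InvoiceID, x => x.Paid);

        // Client side: the left join and the derived columns.
        var invoices =
            (from I in invoiceRows
             let paid = paymentTotals.ContainsKey(I.InvoiceID) ? paymentTotals[I.InvoiceID] : 0m
             select new
             {
                 I.InvoiceID,
                 I.CustomerID,
                 AmountInvoiced = I.Amount,
                 InvoiceDate    = I.Date,
                 AmountPaid     = paid,
                 AmountDue      = I.Amount - paid
             })
            .OrderByDescending(row => row.AmountDue)
            .ToList();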

    Read the article

  • Weighted round robins via TTL - possible?

    - by Joe Hopfgartner
    I currently use DNS round robin for load balancing, which works great. The records look like this (I have a ttl of 120 seconds) ;; ANSWER SECTION: orion.2x.to. 116 IN A 80.237.201.41 orion.2x.to. 116 IN A 87.230.54.12 orion.2x.to. 116 IN A 87.230.100.10 orion.2x.to. 116 IN A 87.230.51.65 I learned that not every ISP / device treats such a response the same way. For example some DNS servers rotate the addresses randomly or always cycle them through. Some just propagate the first entry, others try to determine which is best (regionally near) by looking at the ip address. However if the userbase is big enough (spreads over multiple ISPs etc) it balances pretty well. The discrepancies from highest to lowest loaded server hardly every exceeds 15%. However now I have the problem that I am introducing more servers into the systems, that not all have the same capacities. I currently only have 1gbps servers, but I want to work with 100mbit and also 10gbps servers too. So what I want is I want to introduce a server with 10 GBps with a weight of 100, a 1 gbps server with a weight of 10 and a 100 mbit server with a weight of 1. I used to add servers twice to bring more traffic to them (which worked nice. the bandwidth doubled almost.) But adding a 10gbit server 100 times to DNS is a bit rediculous. So I thought about using the TTL. If I give server A 240 seconds ttl and server B only 120 seconds (which is about about the minimum to use for round robin, as a lot of dns servers set to 120 if a lower ttl is specified.. so i have heard) I think something like this should occour in an ideal scenario: first 120 seconds 50% of requests get server A -> keep it for 240 seconds. 50% of requests get server B -> keep it for 120 seconds second 120 seconds 50% of requests still have server A cached -> keep it for another 120 seconds. 25% of requests get server A -> keep it for 240 seconds 25% of requests get server B -> keep it for 120 seconds third 120 seconds 25% will get server A (from the 50% of Server A that now expired) -> cache 240 sec 25% will get server B (from the 50% of Server A that now expired) -> cache 120 sec 25% will have server A cached for another 120 seconds 12.5% will get server B (from the 25% of server B that now expired) -> cache 120sec 12.5% will get server A (from the 25% of server B that now expired) -> cache 240 sec fourth 120 seconds 25% will have server A cached -> cache for another 120 secs 12.5% will get server A (from the 25% of b that now expired) -> cache 240 secs 12.5% will get server B (from the 25% of b that now expired) -> cache 120 secs 12.5% will get server A (from the 25% of a that now expired) -> cache 240 secs 12.5% will get server B (from the 25% of a that now expired) -> cache 120 secs 6.25% will get server A (from the 12.5% of b that now expired) -> cache 240 secs 6.25% will get server B (from the 12.5% of b that now expired) -> cache 120 secs 12.5% will have server A cached -> cache another 120 secs ... i think i lost something at this point but i think you get the idea.... As you can see this gets pretty complicated to predict and it will for sure not work out like this in practice. But it should definitely have an effect on the distribution! I know that weighted round robin exists and is just controlled by the root server. It just cycles through dns records when responding and returns dns records with a set propability that corresponds to the weighting. My DNS server does not support this, and my requirements are not that precise. 
If it doesn't weight perfectly that's okay, but it should go in the right direction. I think using the TTL field could be a more elegant and easier solution - and it doesn't require a DNS server that controls this dynamically, which saves resources - which is in my opinion the whole point of DNS load balancing vs hardware load balancers. My question now is... are there any best practices / methods / rules of thumb to weight round robin distribution using the TTL attribute of DNS records? Edit: The system is a forward proxy server system. The amount of bandwidth (not requests) exceeds what one single server with ethernet can handle. So I need a balancing solution that distributes the bandwidth to several servers. Are there any alternative methods to using DNS? Of course I can use a load balancer with fibre channel etc, but the costs are ridiculous and it also only increases the width of the bottleneck without eliminating it. The only thing I can think of is anycast (is it anycast or multicast?) IP addresses, but I don't have the means to set up such a system.

    Read the article

  • Purpose of Explicit Default Constructors

    - by Dennis Zickefoose
    I recently noticed a class in C++0x that calls for an explicit default constructor. However, I'm failing to come up with a scenario in which a default constructor can be called implicitly. It seems like a rather pointless specifier. I thought maybe it would disallow Class c; in favor of Class c = Class(); but that does not appear to be the case. Some relevant quotes from the C++0x FCD, since it is easier for me to navigate [similar text exists in C++03, if not in the same places] 12.3.1.3 [class.conv.ctor] A default constructor may be an explicit constructor; such a constructor will be used to perform default-initialization or value initialization (8.5). It goes on to provide an example of an explicit default constructor, but it simply mimics the example I provided above. 8.5.6 [decl.init] To default-initialize an object of type T means: — if T is a (possibly cv-qualified) class type (Clause 9), the default constructor for T is called (and the initialization is ill-formed if T has no accessible default constructor); 8.5.7 [decl.init] To value-initialize an object of type T means: — if T is a (possibly cv-qualified) class type (Clause 9) with a user-provided constructor (12.1), then the default constructor for T is called (and the initialization is ill-formed if T has no accessible default constructor); In both cases, the standard calls for the default constructor to be called. But that is what would happen if the default constructor were non-explicit. For completeness sake: 8.5.11 [decl.init] If no initializer is specified for an object, the object is default-initialized; From what I can tell, this just leaves conversion from no data. Which doesn't make sense. The best I can come up with would be the following: void function(Class c); int main() { function(); //implicitly convert from no parameter to a single parameter } But obviously that isn't the way C++ handles default arguments. What else is there that would make explicit Class(); behave differently from Class();? The specific example that generated this question was std::function [20.8.14.2 func.wrap.func]. It requires several converting constructors, none of which are marked explicit, but the default constructor is.
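    For what it is worth, the case where an explicit default constructor does change behaviour (under the final C++11 rules, which as far as I can tell match the FCD wording quoted above) is copy-list-initialization from an empty braced-init-list, not Class c = Class();. A minimal sketch:

        struct Explicit    { explicit Explicit() {} };
        struct NonExplicit { NonExplicit() {} };

        void take(Explicit)     {}
        void take2(NonExplicit) {}

        int main()
        {
            Explicit    a{};       // OK: direct-list-initialization may use an explicit ctor
            NonExplicit b = {};    // OK: copy-list-initialization, ctor is not explicit
            // Explicit c = {};    // ill-formed: copy-list-init selected an explicit ctor
            // take({});           // ill-formed for the same reason
            take2({});             // OK
            take(Explicit{});      // OK once the construction is spelled out
        }

    Applied to the std::function example at the end of the question, an explicit default constructor would reject function<void()> f = {}; while the non-explicit converting constructors still work implicitly.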

    Read the article

  • Error while rendering .rdl file into pdf format

    - by Arka Chatterjee
    Hi, I an generating reports using SQL Server reporting services. I have generated a report and have put .rdl report file in the "E" drive. Now, when I am going to render the .rdl report file into pdf format,I am getting the exception : - "An error occurred during local report processing." The stack trace is follows : - " at Microsoft.Reporting.WebForms.LocalReport.InternalRender(String format, Boolean allowInternalRenderers, String deviceInfo, CreateAndRegisterStream createStreamCallback, Warning[]& warnings)\r\n at Microsoft.Reporting.WebForms.LocalReport.InternalRender(String format, Boolean allowInternalRenderers, String deviceInfo, String& mimeType, String& encoding, String& fileNameExtension, String[]& streams, Warning[]& warnings)\r\n at Microsoft.Reporting.WebForms.LocalReport.Render(String format, String deviceInfo, String& mimeType, String& encoding, String& fileNameExtension, String[]& streams, Warning[]& warnings)\r\n at SaltlakeSoft.APEX2.Controllers.TestPageController.RenderReport() in E:\Documents and Settings\Administrator\Desktop\afetbuild15thmayapex2\apex2\Controllers\TestPageController.cs:line 1626\r\n at lambda_method(ExecutionScope , ControllerBase , Object[] )\r\n at System.Web.Mvc.ActionMethodDispatcher.<c_DisplayClass1.b_0(ControllerBase controller, Object[] parameters)\r\n at System.Web.Mvc.ActionMethodDispatcher.Execute(ControllerBase controller, Object[] parameters)\r\n at System.Web.Mvc.ReflectedActionDescriptor.Execute(ControllerContext controllerContext, IDictionary2 parameters)\r\n at System.Web.Mvc.ControllerActionInvoker.InvokeActionMethod(ControllerContext controllerContext, ActionDescriptor actionDescriptor, IDictionary2 parameters)\r\n at System.Web.Mvc.ControllerActionInvoker.<c_DisplayClassa.b_7()\r\n at System.Web.Mvc.ControllerActionInvoker.InvokeActionMethodFilter(IActionFilter filter, ActionExecutingContext preContext, Func`1 continuation)" I am using the following code : - LocalReport report = new LocalReport(); report.ReportPath = @"E:\Report1.rdl"; List employeeCollection = empRepository.FindAll().ToList(); ReportDataSource reportDataSource = new ReportDataSource("dataSource1",employeeCollection); report.DataSources.Clear(); report.DataSources.Add(reportDataSource); report.Refresh(); string reportType = "PDF"; string mimeType; string encoding; string fileNameExtension; string deviceInfo ="" +"PDF" + "8.5in" + "11in" + "0.5in" +"1in" + "1in" +"0.5in" + ""; Warning[] warnings; string[] streams; byte[] renderedBytes; renderedBytes = report.Render(reportType,deviceInfo,out mimeType,out encoding, out fileNameExtension, out streams, out warnings); Response.Clear(); Response.ContentType = mimeType; Response.AddHeader("content-disposition", "attachment; filename=foo." + fileNameExtension); Response.BinaryWrite(renderedBytes); Response.End(); Please help me. Thanks in advance- Arka
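    Two things worth checking, offered as guesses rather than a diagnosis: the deviceInfo string above has clearly lost its XML tags somewhere between the code and the post (the renderer expects a small XML document), and "An error occurred during local report processing" very often wraps a more specific InnerException, typically a mismatch between the ReportDataSource name ("dataSource1") and the DataSet name declared inside Report1.rdl. A sketch of the rendering call with a well-formed DeviceInfo block; the page and margin values mirror the fragments visible in the question:

        string deviceInfo =
            "<DeviceInfo>" +
            "  <OutputFormat>PDF</OutputFormat>" +
            "  <PageWidth>8.5in</PageWidth>" +
            "  <PageHeight>11in</PageHeight>" +
            "  <MarginTop>0.5in</MarginTop>" +
            "  <MarginLeft>1in</MarginLeft>" +
            "  <MarginRight>1in</MarginRight>" +
            "  <MarginBottom>0.5in</MarginBottom>" +
            "</DeviceInfo>";

        string mimeType, encoding, fileNameExtension;
        string[] streams;
        Warning[] warnings;

        byte[] renderedBytes = report.Render(
            "PDF", deviceInfo,
            out mimeType, out encoding, out fileNameExtension,
            out streams, out warnings);

    Logging ex.InnerException (and the warnings array) when the call throws usually points at the real culprit much faster than the outer message does.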

    Read the article

  • Problem getting correct parameters for C# P/Invoke call to C++ dll

    - by Jim Jones
    Trying to Interop a functionality from the Outside In API from Oracle. Have the following function: SCCERR EXOpenExport {VTHDOC hDoc, VTDWORD dwOutputId, VTDWORD dwSpecType, VTLPVOID pSpec, VTDWORD dwFlags, VTSYSPARAM dwReserved, VTLPVOID pCallbackFunc, VTSYSPARAM dwCallbackData, VTLPHEXPORT phExport); From the header files I reduced the parameters to: typedef VTSYSPARAM VTHDOC, VTLPHDOC * typedef DWORD_PTR VTSYSPARAM typedef unsigned long DWORD_PTR typedef unsigned long VTDWORD typedef VTVOID* VTLPVOID #define VTVOID void typedef VTHDOC VTHEXPORT, *VTLPEXPORT These are for 32 bit windows Going through the header files, the example programs, and the documentation I found: 1. That pSpec could be a pointer to a buffer or NULL, so I set it to a IntPtr.Zero (documentation). 2. That dwFlags and dwReserved according to the documentation "Must be set by the developer to 0". 3. That pCallbackFunc can be set to NULL if I don't want to handle callbacks. 4. That the last two are based on structs that I wrote C# wrappers for using the [StructLayout(LayoutKind.Sequential)]. Then instatiated an instance and generated the parameters by first creating a IntPtr with Marshal.AllocHGlobal(Marshal.SizeOf(instance)), then getting the address value which is passed as a uint for dwCallbackData and a IntPtr for phExport. The final parameter list is as follows: 1. phDoc as a IntPtr which was loaded with an address by the DAOpenDocument function called before. 2. dwOutputId as uint set to 1535 which represents FI_JPEGFIF 3. dwSpecType as int set to 2 which represents IOTYPE_ANSIPATH 4. pSpec as an IntPtr.Zero where the output will be written 5. dwFlags as uint set to 0 as directed 6. dwReserved as uint set to 0 as directed 7. pCallbackFunc as IntPtr set to NULL as I will handle results 8. dwCallBackDate as uint the address of a buffer for a struct 9. phExport as IntPtr to another struct buffer still get an undefined error from the API. Meaning that the call returns a 961 which is not defined in any of the header files. In the past I have gotten this when my choice of parameter types are incorrect. I started out using Interop Assistant which was helpful in learning how many of the parameter types get translated. It is however limited by how well I am able to glean the correct native type from the header files. For example the hDoc parameter used in the preceding function was defined as a non-filesytem handle, so attempted to use Marshal to create a handle, then used an IntPtr, and finally it turned out to be an int (actually it was &phDoc used here). So is there a more scientific way of doing this, other than trial and error? Jim

    Read the article

  • Hibernate + Spring : cascade deletion ignoring non-nullable constraints

    - by E.Benoît
    Hello, I seem to be having one weird problem with some Hibernate data classes. In a very specific case, deleting an object should fail due to existing, non-nullable relations - however it does not. The strangest part is that a few other classes related to the same definition behave appropriately. I'm using HSQLDB 1.8.0.10, Hibernate 3.5.0 (final) and Spring 3.0.2. The Hibernate properties are set so that batch updates are disabled. The class whose instances are being deleted is: @Entity( name = "users.Credentials" ) @Table( name = "credentials" , schema = "users" ) public class Credentials extends ModelBase { private static final long serialVersionUID = 1L; /* Some basic fields here */ /** Administrator credentials, if any */ @OneToOne( mappedBy = "credentials" , fetch = FetchType.LAZY ) public AdminCredentials adminCredentials; /** Active account data */ @OneToOne( mappedBy = "credentials" , fetch = FetchType.LAZY ) public Account activeAccount; /* Some more reverse relations here */ } (ModelBase is a class that simply declares a Long field named "id" as being automatically generated) The Account class, which is one for which constraints work, looks like this: @Entity( name = "users.Account" ) @Table( name = "accounts" , schema = "users" ) public class Account extends ModelBase { private static final long serialVersionUID = 1L; /** Credentials the account is linked to */ @OneToOne( optional = false ) @JoinColumn( name = "credentials_id" , referencedColumnName = "id" , nullable = false , updatable = false ) public Credentials credentials; /* Some more fields here */ } And here is the AdminCredentials class, for which the constraints are ignored. @Entity( name = "admin.Credentials" ) @Table( name = "admin_credentials" , schema = "admin" ) public class AdminCredentials extends ModelBase { private static final long serialVersionUID = 1L; /** Credentials linked with an administrative account */ @OneToOne( optional = false ) @JoinColumn( name = "credentials_id" , referencedColumnName = "id" , nullable = false , updatable = false ) public Credentials credentials; /* Some more fields here */ } The code that attempts to delete the Credentials instances is: try { if ( account.validationKey != null ) { this.hTemplate.delete( account.validationKey ); } this.hTemplate.delete( account.languageSetting ); this.hTemplate.delete( account ); } catch ( DataIntegrityViolationException e ) { return false; } Where hTemplate is a HibernateTemplate instance provided by Spring, its flush mode having been set to EAGER. In the conditions shown above, the deletion will fail if there is an Account instance that refers to the Credentials instance being deleted, which is the expected behaviour. However, an AdminCredentials instance will be ignored, the deletion will succeed, leaving an invalid AdminCredentials instance behind (trying to refresh that instance causes an error because the Credentials instance no longer exists). I have tried moving the AdminCredentials table from the admin DB schema to the users DB schema. Strangely enough, a deletion-related error is then triggered, but not in the deletion code - it is triggered at the next query involving the table, seemingly ignoring the flush mode setting. I've been trying to understand this for hours and I must admit I'm just as clueless now as I was then.
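    Not an explanation for why HSQLDB lets the delete through, but a defensive check along these lines keeps the orphaned AdminCredentials row from appearing while the constraint issue is tracked down. It is a sketch meant to live in the same service class as the deletion code above (hence the hTemplate field), and it uses the Criteria API so the unusual entity names ("admin.Credentials") do not get in the way:

        // uses org.hibernate.criterion.DetachedCriteria and Restrictions
        private boolean safeDeleteCredentials(Credentials credentials) {
            boolean stillReferenced = !hTemplate.findByCriteria(
                    DetachedCriteria.forClass(AdminCredentials.class)
                                    .add(Restrictions.eq("credentials", credentials)))
                    .isEmpty();
            if (stillReferenced) {
                return false; // same contract as the DataIntegrityViolationException branch
            }
            hTemplate.delete(credentials);
            return true;
        }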

    Read the article

  • Unable to execute stored Procedure using Java and JDBC

    - by jwmajors81
    I have been trying to execute a MS SQL Server stored procedure via JDBC today and have been unsuccessful thus far. The stored procedure has 1 input and 1 output parameter. With every combination I use when setting up the stored procedure call in code I get an error stating that the stored procedure couldn't be found. I have provided the stored procedure I'm executing below (NOTE: this is vendor code, so I cannot change it). set ANSI_NULLS ON set QUOTED_IDENTIFIER ON GO ALTER PROC [dbo].[spWCoTaskIdGen] @OutIdentifier int OUTPUT AS BEGIN DECLARE @HoldPolicyId int DECLARE @PolicyId char(14) IF NOT EXISTS ( SELECT * FROM UniqueIdentifierGen (UPDLOCK) ) INSERT INTO UniqueIdentifierGen VALUES (0) UPDATE UniqueIdentifierGen SET CurIdentifier = CurIdentifier + 1 SELECT @OutIdentifier = (SELECT CurIdentifier FROM UniqueIdentifierGen) END The code looks like: CallableStatement statement = connection .prepareCall("{call dbo.spWCoTaskIdGen(?)}"); statement.setInt(1, 0); ResultSet result = statement.executeQuery(); I get the following error: SEVERE: Could not find stored procedure 'dbo.spWCoTaskIdGen'. I have also tried CallableStatement statement = connection .prepareCall("{? = call dbo.spWCoTaskIdGen(?)}"); statement.registerOutParameter(1, java.sql.Types.INTEGER); statement.registerOutParameter(2, java.sql.Types.INTEGER); statement.executeQuery(); The above results in: SEVERE: Could not find stored procedure 'dbo.spWCoTaskIdGen'. I have also tried: CallableStatement statement = connection .prepareCall("{? = call spWCoTaskIdGen(?)}"); statement.registerOutParameter(1, java.sql.Types.INTEGER); statement.registerOutParameter(2, java.sql.Types.INTEGER); statement.executeQuery(); The code above resulted in the following error: Could not find stored procedure 'spWCoTaskIdGen'. Finally, I should also point out the following: I have used the MS SQL Server Management Studio tool and have been able to successfully run the stored procedure. The sql generated to execute the stored procedure is provided below: GO DECLARE @return_value int, @OutIdentifier int EXEC @return_value = [dbo].[spWCoTaskIdGen] @OutIdentifier = @OutIdentifier OUTPUT SELECT @OutIdentifier as N'@OutIdentifier ' SELECT 'Return Value' = @return_value GO The code being executed runs with the same user id that was used in point #1 above. In the code that creates the Connection object I log which database I'm connecting to and the code is connecting to the correct database. Any ideas? Thank you very much in advance.
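    Given that Management Studio runs the procedure fine with the same login, the usual suspects are the JDBC connection defaulting to a different database than the one that owns the procedure, and the OUT parameter never being registered. A sketch of what I would expect to work; "YourDb" is a placeholder for the actual database name, and execute() is used because the procedure returns no result set, only the single OUTPUT parameter it declares:

        CallableStatement statement =
                connection.prepareCall("{call YourDb.dbo.spWCoTaskIdGen(?)}");
        statement.registerOutParameter(1, java.sql.Types.INTEGER);
        statement.execute();
        int newTaskId = statement.getInt(1);

    Printing connection.getCatalog() just before the call is a quick way to confirm which database the driver actually landed in.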

    Read the article

  • Binding CoreData Managed Object to NSTextFieldCell subclass

    - by ndg
    I have an NSTableView which has its first column set to contain a custom NSTextFieldCell. My custom NSTextFieldCell needs to allow the user to edit a "desc" property within my Managed Object but to also display an "info" string that it contains (which is not editable). To achieve this, I followed this tutorial. In a nutshell, the tutorial suggests editing your Managed Objects generated subclass to create and pass a dictionary of its contents to your NSTableColumn via bindings. This works well for read-only NSCell implementations, but I'm looking to subclass NSTextFieldCell to allow the user to edit the "desc" property of my Managed Object. To do this, I followed one of the articles comments, which suggests subclassing NSFormatter to explicitly state which Managed Object property you would like the NSTextFieldCell to edit. Here's the suggested implementation: @implementation TRTableDescFormatter - (BOOL)getObjectValue:(id *)anObject forString:(NSString *)string errorDescription:(NSString **)error { if (anObject != nil){ *anObject = [NSDictionary dictionaryWithObject:string forKey:@"desc"]; return YES; } return NO; } - (NSString *)stringForObjectValue:(id)anObject { if (![anObject isKindOfClass:[NSDictionary class]]) return nil; return [anObject valueForKey:@"desc"]; } - (NSAttributedString*)attributedStringForObjectValue:(id)anObject withDefaultAttributes:(NSDictionary *)attrs { if (![anObject isKindOfClass:[NSDictionary class]]) return nil; NSAttributedString *anAttributedString = [[NSAttributedString alloc] initWithString: [anObject valueForKey:@"desc"]]; return anAttributedString; } @end I assign the NSFormatter subclass to my cell in my NSTextFieldCell subclass, like so: - (void)awakeFromNib { TRTableDescFormatter *formatter = [[[TRTableDescFormatter alloc] init] autorelease]; [self setFormatter:formatter]; } This seems to work, but falls down when editing multiple rows. The behaviour I'm seeing is that editing a row will work as expected until you try to edit another row. Upon editing another row, all previously edited rows will have their "desc" value set to the value of the currently selected row. I've been doing a lot of reading on this subject and would really like to get to the bottom of this. What's more frustrating is that my NSTextFieldCell is rendering exactly how I would like it to. This editing issue is my last obstacle! If anyone can help, that would be greatly appreciated.

    Read the article

  • How to delete multiple files with msbuild/web deployment project?

    - by Alex
    I have an odd issue with how msbuild is behaving with a VS2008 Web Deployment Project and would like to know why it seems to randomly misbehave. I need to remove a number of files from a deployment folder that should only exist in my development environment. The files have been generated by the web application during dev/testing and are not included in my Visual Studio project/solution. The configuration I am using is as follows: <!-- Partial extract from Microsoft Visual Studio 2008 Web Deployment Project --> <ItemGroup> <DeleteAfterBuild Include="$(OutputPath)data\errors\*.xml" /> <!-- Folder 1: 36 files --> <DeleteAfterBuild Include="$(OutputPath)data\logos\*.*" /> <!-- Folder 2: 2 files --> <DeleteAfterBuild Include="$(OutputPath)banners\*.*" /> <!-- Folder 3: 1 file --> </ItemGroup> <Target Name="AfterBuild"> <Message Text="------ AfterBuild process starting ------" Importance="high" /> <Delete Files="@(DeleteAfterBuild)"> <Output TaskParameter="DeletedFiles" PropertyName="deleted" /> </Delete> <Message Text="DELETED FILES: $(deleted)" Importance="high" /> <Message Text="------ AfterBuild process complete ------" Importance="high" /> </Target> The problem I have is that when I do a build/rebuild of the Web Deployment Project it "sometimes" removes all the files but other times it will not remove anything! Or it will remove only one or two of the three folders in the DeleteAfterBuild item group. There seems to be no consistency in when the build process decides to remove the files or not. When I've edited the configuration to include only Folder 1 (for example), it removes all the files correctly. Then adding Folder 2 and 3, it starts removing all the files as I want. Then, seeming at random times, I'll rebuild the project and it won't remove any of the files! I have tried moving these items to the ExcludeFromBuild item group (which is probably where it should be) but it gives me the same unpredictable result. Has anyone experienced this? Am I doing something wrong? Why does this happen?
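    One plausible explanation for the randomness: a top-level <ItemGroup> is evaluated when the project file is loaded, before the build has produced or copied anything into $(OutputPath), so whether the wildcards match anything depends on what the previous build happened to leave behind. Declaring the items inside the target (supported by the MSBuild 3.5 that VS2008 uses) defers evaluation until AfterBuild actually runs. A sketch:

        <Target Name="AfterBuild">
          <Message Text="------ AfterBuild process starting ------" Importance="high" />
          <!-- Evaluate the wildcards now, not at project load time. -->
          <ItemGroup>
            <DeleteAfterBuild Include="$(OutputPath)data\errors\*.xml" />
            <DeleteAfterBuild Include="$(OutputPath)data\logos\*.*" />
            <DeleteAfterBuild Include="$(OutputPath)banners\*.*" />
          </ItemGroup>
          <Delete Files="@(DeleteAfterBuild)">
            <Output TaskParameter="DeletedFiles" ItemName="DeletedList" />
          </Delete>
          <Message Text="DELETED FILES: @(DeletedList)" Importance="high" />
          <Message Text="------ AfterBuild process complete ------" Importance="high" />
        </Target>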

    Read the article

  • javascript problems when generating html reports from within java

    - by posdef
    Hi, I have been working on a Java project in which the reports will be generated in HTML, so I am implementing methods for creating these reports. One important functionality is to be able to have as much info as possible in the tables, but still not clutter too much. In other words the details should be available if the user wishes to take a look at them but not necessarily visible by default. I have done some searching and testing and found an interesting template for hiding/showing content with the use of CSS and javascript, the problem is that when I try the resultant html page the scripts dont work. I am not sure if it's due a problem in Java or in the javascript itself. I have compared the html code that java produces to the source where I found the template, they seem to match pretty well. Below are bits of my java code that generates the javascript and the content, i would greatly appreciate if anyone can point out the possible reasons for this problem: //goes to head private void addShowHideScript() throws IOException{ StringBuilder sb = new StringBuilder(); sb.append("<script type=\"text/javascript\" language=\"JavaScript\">\n"); sb.append("<!--function HideContent(d) {\n"); sb.append("document.getElementById(d).style.display=\"none\";}\n"); sb.append("function ShowContent(d) {\n"); sb.append("document.getElementById(d).style.display=\"block\";}\n"); sb.append("function ReverseDisplay(d) {\n"); sb.append("if(document.getElementById(d).style.display==\"none\")\n"); sb.append("{ document.getElementById(d).style.display=\"block\"; }\n"); sb.append("else { document.getElementById(d).style.display=\"none\"; }\n}\n"); sb.append("//--></script>\n"); out.write(sb.toString()); out.newLine(); } // body private String linkShowHideContent(String pathname, String divname){ StringBuilder sb = new StringBuilder(); sb.append("<a href=\"javascript:ReverseDisplay('"); sb.append(divname); sb.append("')\">"); sb.append(pathname); sb.append("</a>"); return sb.toString(); } // content out.write(linkShowHideContent("hidden content", "ex")); out.write("<div id=\"ex\" style=\"display:block;\">"); out.write("<p>Content goes here.</p></div>");
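    One thing that jumps out of the generated markup: the script starts the HTML comment and the first function on the same line ("<!--function HideContent(d) {"). Browsers treat <!-- inside a script block as a single-line comment, so that whole line, function declaration included, is discarded and the rest of the script breaks. Emitting a newline after the comment marker (or dropping the legacy comment wrapper altogether) should be enough; here is a sketch of the adjusted method, keeping the same sb and out fields as the original:

        private void addShowHideScript() throws IOException {
            StringBuilder sb = new StringBuilder();
            sb.append("<script type=\"text/javascript\">\n");
            sb.append("//<!--\n"); // the newline (and leading //) keep the functions out of the comment
            sb.append("function HideContent(d) {\n");
            sb.append("  document.getElementById(d).style.display = \"none\";\n");
            sb.append("}\n");
            sb.append("function ShowContent(d) {\n");
            sb.append("  document.getElementById(d).style.display = \"block\";\n");
            sb.append("}\n");
            sb.append("function ReverseDisplay(d) {\n");
            sb.append("  if (document.getElementById(d).style.display == \"none\") {\n");
            sb.append("    document.getElementById(d).style.display = \"block\";\n");
            sb.append("  } else {\n");
            sb.append("    document.getElementById(d).style.display = \"none\";\n");
            sb.append("  }\n");
            sb.append("}\n");
            sb.append("//-->\n");
            sb.append("</script>\n");
            out.write(sb.toString());
            out.newLine();
        }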

    Read the article

  • How can I make Swig correctly wrap a char* buffer that is modified in C as a Java Something-or-other

    - by Ukko
    I am trying to wrap some legacy code for use in Java and I was quite happy to see that Swig was able to handle the header file and it generate a great wrapper that almost works. Now I am looking for the deep magic that will make it really work. In C I have a function that looks like this DLL_IMPORT int DustyVoodoo(char *buff, int len, char *curse); This integer returned by this function is an error code in case it fails. The arguments are buff is a character buffer len is the length of the data in the buffer curse the another character buffer that contains the result of calling DustyVoodoo So, you can see where this is going, the result is actually coming back via the third argument. Also len is confusing since it may be the length of both buffers, they are always allocated as being the same size in calling code but given what DustyVoodoo does I don't think that they need be the same. To be safe both buffers should be the same size in practice, say 512 chars. The C code generated for the binding is as follows: SWIGEXPORT jint JNICALL Java_pemapiJNI_DustyVoodoo(JNIEnv *jenv, jclass jcls, jstring jarg1, jint jarg2, jstring jarg3) { jint jresult = 0 ; char *arg1 = (char *) 0 ; int arg2 ; char *arg3 = (char *) 0 ; int result; (void)jenv; (void)jcls; arg1 = 0; if (jarg1) { arg1 = (char *)(*jenv)->GetStringUTFChars(jenv, jarg1, 0); if (!arg1) return 0; } arg2 = (int)jarg2; arg3 = 0; if (jarg3) { arg3 = (char *)(*jenv)->GetStringUTFChars(jenv, jarg3, 0); if (!arg3) return 0; } result = (int)PemnEncrypt(arg1,arg2,arg3); jresult = (jint)result; if (arg1) (*jenv)->ReleaseStringUTFChars(jenv, jarg1, (const char *)arg1); if (arg3) (*jenv)->ReleaseStringUTFChars(jenv, jarg3, (const char *)arg3); return jresult; } It is correct for what it does; however, it misses the fact that cursed is not just an input, it is altered by the function and should be returned as an output. It also does not know that the java Strings are really buffers and should be backed by a suitably sized array. I think that Swig can do the right thing here, I just can't figure out from the documentation how to tell Swig what it needs to know. Any typemap masers in the house?
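    One route worth checking, offered with the caveat that I have not verified it against this particular API or the Java backend: SWIG's cstring.i library exists precisely for "the function fills in this char* buffer" signatures. If cstring.i is supported by the Java module in your SWIG version, an interface file along these lines would turn curse into an output that surfaces on the Java side instead of an ignored input; the module and header names below are made up for the sketch:

        /* dustyvoodoo.i -- sketch only, names are placeholders */
        %module dustyvoodoo

        %include "cstring.i"
        /* treat curse as an output buffer of at most 512 bytes */
        %cstring_bounded_output(char *curse, 512);

        %{
        #include "dustyvoodoo.h"
        %}

        int DustyVoodoo(char *buff, int len, char *curse);

    Failing that, the "C string handling" section of the SWIG library documentation and the Java chapter's char* typemap examples cover the manual typemap route.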

    Read the article

  • How do I design a generic database whose layout may change over time?

    - by mawg
    Here's a tricky one - how do I programmatically create and interrogate a database whose contents I can't really foresee? I am implementing a generic input form system. The user can create PHP forms with a WYSIWYG layout and use them for any purpose he wishes. He can also query the input. So, we have three stages: a form is designed and generated. This is a one-off procedure, although the form can be edited later. This designs the database. Someone or several people make use of the form - say for daily sales reports, stock keeping, payroll, etc. Their input to the forms is written to the database. Others, maybe management, can query the database and generate reports. Since these forms are generic, I can't predict the database structure - other than to say that it will reflect HTML form fields and consist of the data input from a collection of edit boxes, memos, radio buttons and the like. Questions and remarks: A) How can I best structure the database, in terms of tables and columns? What about primary keys? My first thought was to use the control name to identify each column, then I realized that the user can edit the form and rename, so that maybe "name" becomes "employee" or "wages" becomes "salary". I am leaning towards a unique number for each. B) How best to key the rows? I was thinking of a timestamp to allow me to query, and a column for the row id from A). C) I have to handle column rename/insert/delete. For deletion, I am unsure whether to delete the data from the database. Even if the user is no longer inputting it from the form, he may wish to query what was previously entered. Or there may be some legal requirements to retain the data. Any gotchas in column rename/insert/delete? D) For the querying, I can have my PHP interrogate the database to get column names and generate a form with a list where each entry has a database column name, a checkbox to say if it should be used in the query and, based on column type, some selection criteria. That ought to be enough to build searches like "position = 'senior salesman' and salary > 50k". E) I probably have to generate some fancy charts - graphs, histograms, pie charts, etc. for query results of numerical data over time. I need to find some good FOSS PHP for this. F) What else have I forgotten? This all seems very tricky to me, but I am a database n00b - maybe it is simple to you gurus?
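    For what it is worth, this is essentially the classic entity-attribute-value layout. A minimal sketch of the tables (MySQL flavour, names purely illustrative) that also answers A) and B): give every field a surrogate id so renames never touch stored data, and key each submission with its own id plus a timestamp.

        CREATE TABLE forms (
            form_id   INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
            title     VARCHAR(255) NOT NULL
        );

        CREATE TABLE form_fields (
            field_id   INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
            form_id    INT UNSIGNED NOT NULL,
            label      VARCHAR(255) NOT NULL,   -- "name" today, "employee" after a rename
            field_type VARCHAR(32)  NOT NULL,   -- edit box, memo, radio, ...
            deleted_at DATETIME     NULL,       -- soft delete keeps old answers queryable
            FOREIGN KEY (form_id) REFERENCES forms(form_id)
        );

        CREATE TABLE submissions (
            submission_id INT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
            form_id       INT UNSIGNED NOT NULL,
            submitted_at  DATETIME     NOT NULL,
            FOREIGN KEY (form_id) REFERENCES forms(form_id)
        );

        CREATE TABLE submission_values (
            submission_id INT UNSIGNED NOT NULL,
            field_id      INT UNSIGNED NOT NULL,
            value         TEXT,
            PRIMARY KEY (submission_id, field_id),
            FOREIGN KEY (submission_id) REFERENCES submissions(submission_id),
            FOREIGN KEY (field_id)      REFERENCES form_fields(field_id)
        );

    With this shape, C) becomes cheap: renaming a field is an UPDATE on form_fields.label, deleting one is a soft delete, and adding one needs no ALTER TABLE at all. The usual trade-off is that reporting queries (D and E) have to pivot rows back into columns, which is where most of the complexity ends up.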

    Read the article

  • Sharepoint web part stops working because of Resources.en-US.resx file

    - by Eric C
    I've been developing a Sharepoint web part, which had been working fine upon deployment. The web part has been developed with WSP Builder, packaged up and then deployed via stsadm. The web part has been deployed tens, if not a hundred times to the dev box with no problems. Now, the web part throws an error which breaks the page it's on: Object reference not set to an instance of an object. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.NullReferenceException: Object reference not set to an instance of an object. Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below. Stack Trace: [NullReferenceException: Object reference not set to an instance of an object.] NYCIRB.DMS.WebParts.SearchUpload.SearchUpload.HandleException(Exception ex) +62 NYCIRB.DMS.WebParts.SearchUpload.SearchUpload.OnLoad(EventArgs e) +214 System.Web.UI.Control.LoadRecursive() +50 System.Web.UI.Control.LoadRecursive() +141 System.Web.UI.Control.LoadRecursive() +141 System.Web.UI.Control.LoadRecursive() +141 System.Web.UI.Control.LoadRecursive() +141 System.Web.UI.Control.LoadRecursive() +141 System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +627 When looking through my Sharepoint logs, I find these errors repeated over and over which correspond to the time the web part was attempted to be loaded: 01/19/2009 10:53:14.43 w3wp.exe (0x05E0) 0x00FC Windows SharePoint Services General 72kg High (#2: Cannot open "Resources.en-US.resx": no such file or folder.) 01/19/2009 10:53:14.43 w3wp.exe (0x05E0) 0x00FC Windows SharePoint Services General 8e26 Medium Failed to open the language resource for Fea367b94a9-4a15-42ba-b4a2-32420363e018 keyfile Resources. 01/19/2009 10:53:17.55 w3wp.exe (0x05E0) 0x00FC Windows SharePoint Services General 8e25 Medium Failed to look up string with key "XomlUrl", keyfile core. 01/19/2009 10:53:17.55 w3wp.exe (0x05E0) 0x00FC Windows SharePoint Services General 8l3c Medium Localized resource for token 'XomlUrl' could not be found for file with path: "C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\Template\Features\Fields\fieldswss.xml". 01/19/2009 10:53:17.55 w3wp.exe (0x05E0) 0x00FC Windows SharePoint Services General 8e25 Medium Failed to look up string with key "RulesUrl", keyfile core. 01/19/2009 10:53:17.55 w3wp.exe (0x05E0) 0x00FC Windows SharePoint Services General 8l3c Medium Localized resource for token 'RulesUrl' could not be found for file with path: "C:\Program Files\Common Files\Microsoft Shared\Web Server Extensions\12\Template\Features\Fields\fieldswss.xml". I've retracted the web part manually through Solution Management, retracted through stsadm, checked for the existence of the resource file, which is nowhere to be found. I'm pretty much at a loss to why this happened or how to resolve it.

    Read the article

  • ASP.NET application developed in 32 bit environment not working in 64 bit environment

    - by jgonchik
    We have developed an ASP.NET website on a Windows 7 - 32 bit platform using Visual Studio 2008. This website is being hosted at a hosting company where we share a server with hundreds of other ASP.NET websites. We are in the process of changing our hosting to a dedicated Windows 2008 - 64 bit server. We have installed Visual Studio on this new server in order to debug our application. If we try to start the application on this new server using Visual Studios 2008's own web server (not IIS 7) we get the error below. We have tried to compile the application in both 32 as well as 64 bit mode. We also tried to compile to "Any CPU". But nothing helps. We also tried running Visual Studio as an administrator but without success. We get the following error: Server Error in '/' Application. The specified module could not be found. (Exception from HRESULT: 0x8007007E) Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.IO.FileNotFoundException: The specified module could not be found. (Exception from HRESULT: 0x8007007E) Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below. Stack Trace: [FileNotFoundException: The specified module could not be found. (Exception from HRESULT: 0x8007007E)] System.Reflection.Assembly._nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, Assembly locationHint, StackCrawlMark& stackMark, Boolean throwOnFileNotFound, Boolean forIntrospection) +0 System.Reflection.Assembly.nLoad(AssemblyName fileName, String codeBase, Evidence assemblySecurity, Assembly locationHint, StackCrawlMark& stackMark, Boolean throwOnFileNotFound, Boolean forIntrospection) +43 System.Reflection.Assembly.InternalLoad(AssemblyName assemblyRef, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection) +127 System.Reflection.Assembly.InternalLoad(String assemblyString, Evidence assemblySecurity, StackCrawlMark& stackMark, Boolean forIntrospection) +142 System.Reflection.Assembly.Load(String assemblyString) +28 System.Web.Configuration.CompilationSection.LoadAssemblyHelper(String assemblyName, Boolean starDirective) +46 [ConfigurationErrorsException: The specified module could not be found. (Exception from HRESULT: 0x8007007E)] System.Web.Configuration.CompilationSection.LoadAssemblyHelper(String assemblyName, Boolean starDirective) +613 System.Web.Configuration.CompilationSection.LoadAllAssembliesFromAppDomainBinDirectory() +203 System.Web.Configuration.CompilationSection.LoadAssembly(AssemblyInfo ai) +105 System.Web.Compilation.BuildManager.GetReferencedAssemblies(CompilationSection compConfig) +178 System.Web.Compilation.BuildProvidersCompiler..ctor(VirtualPath configPath, Boolean supportLocalization, String outputAssemblyName) +54 System.Web.Compilation.ApplicationBuildProvider.GetGlobalAsaxBuildResult(Boolean isPrecompiledApp) +232 System.Web.Compilation.BuildManager.CompileGlobalAsax() +51 System.Web.Compilation.BuildManager.EnsureTopLevelFilesCompiled() +337 [HttpException (0x80004005): The specified module could not be found. 
(Exception from HRESULT: 0x8007007E)] System.Web.Compilation.BuildManager.ReportTopLevelCompilationException() +58 System.Web.Compilation.BuildManager.EnsureTopLevelFilesCompiled() +512 System.Web.Hosting.HostingEnvironment.Initialize(ApplicationManager appManager, IApplicationHost appHost, IConfigMapPathFactory configMapPathFactory, HostingEnvironmentParameters hostingParameters) +729 [HttpException (0x80004005): The specified module could not be found. (Exception from HRESULT: 0x8007007E)] System.Web.HttpRuntime.FirstRequestInit(HttpContext context) +8897659 System.Web.HttpRuntime.EnsureFirstRequestInit(HttpContext context) +85 System.Web.HttpRuntime.ProcessRequestInternal(HttpWorkerRequest wr) +259 Does anyone know why this error appears and how to solve it?

    Read the article

  • Can anyone explain the source code of Python's "import this"?

    - by byterussian
    If you open a Python interpreter, and type "import this", as you know, it prints: The Zen of Python, by Tim Peters Beautiful is better than ugly. Explicit is better than implicit. Simple is better than complex. Complex is better than complicated. Flat is better than nested. Sparse is better than dense. Readability counts. Special cases aren't special enough to break the rules. Although practicality beats purity. Errors should never pass silently. Unless explicitly silenced. In the face of ambiguity, refuse the temptation to guess. There should be one-- and preferably only one --obvious way to do it. Although that way may not be obvious at first unless you're Dutch. Now is better than never. Although never is often better than *right* now. If the implementation is hard to explain, it's a bad idea. If the implementation is easy to explain, it may be a good idea. Namespaces are one honking great idea -- let's do more of those! In the python source(Lib/this.py) this text is generated by a curios piece of code: s = """Gur Mra bs Clguba, ol Gvz Crgref Ornhgvshy vf orggre guna htyl. Rkcyvpvg vf orggre guna vzcyvpvg. Fvzcyr vf orggre guna pbzcyrk. Pbzcyrk vf orggre guna pbzcyvpngrq. Syng vf orggre guna arfgrq. Fcnefr vf orggre guna qrafr. Ernqnovyvgl pbhagf. Fcrpvny pnfrf nera'g fcrpvny rabhtu gb oernx gur ehyrf. Nygubhtu cenpgvpnyvgl orngf chevgl. Reebef fubhyq arire cnff fvyragyl. Hayrff rkcyvpvgyl fvyraprq. Va gur snpr bs nzovthvgl, ershfr gur grzcgngvba gb thrff. Gurer fubhyq or bar-- naq cersrenoyl bayl bar --boivbhf jnl gb qb vg. Nygubhtu gung jnl znl abg or boivbhf ng svefg hayrff lbh'er Qhgpu. Abj vf orggre guna arire. Nygubhtu arire vf bsgra orggre guna *evtug* abj. Vs gur vzcyrzragngvba vf uneq gb rkcynva, vg'f n onq vqrn. Vs gur vzcyrzragngvba vf rnfl gb rkcynva, vg znl or n tbbq vqrn. Anzrfcnprf ner bar ubaxvat terng vqrn -- yrg'f qb zber bs gubfr!""" d = {} for c in (65, 97): for i in range(26): d[chr(i+c)] = chr((i+13) % 26 + c) print "".join([d.get(c, c) for c in s])
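    In short: d maps every ASCII letter to the letter 13 places further along (the 65/'A' loop handles upper case, the 97/'a' loop lower case, each wrapping modulo 26), and the final join applies that mapping character by character, leaving punctuation and whitespace untouched. In other words, the quoted block is just the Zen encoded with ROT13. A quick sketch showing the equivalence with the standard codecs module:

        import codecs
        import this   # importing it prints the Zen and exposes the raw string as this.s

        decoded = codecs.decode(this.s, "rot_13")   # on Python 2: this.s.decode("rot13")
        print(decoded.splitlines()[0])              # The Zen of Python, by Tim Peters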

    Read the article

  • Vim + OmniCppComplete: Completing on Class Members which are STL containers

    - by Robert S. Barnes
    Completion on class members which are STL containers is failing. Completion on local objects which are STL containers works fine. For example, given the following files: // foo.h #include <string> class foo { public: void set_str(const std::string &); std::string get_str_reverse( void ); private: std::string str; }; // foo.cpp #include "foo.h" using std::string; string foo::get_str_reverse ( void ) { string temp; temp.assign(str); reverse(temp.begin(), temp.end()); return temp; } /* ----- end of method foo::get_str ----- */ void foo::set_str ( const string &s ) { str.assign(s); } /* ----- end of method foo::set_str ----- */ I've generated the tags for these two files using: ctags -R --c++-kinds=+pl --fields=+iaS --extra=+q . When I type temp. in the cpp I get a list of string member functions as expected. But if I type str. omnicppcomplete spits out "Pattern Not Found". I've noticed that the temp. completion only works if I have the using std::string; declaration. How do I get completion to work on my class members which are STL containers? Edit I found that completion on members which are STL containers works if I make the follow modifications to the header: // foo.h #include <string> using std::string; class foo { public: void set_str(const string &); string get_str_reverse( void ); private: string str; }; Basically, if I add using std::string; and then remove the std:: name space qualifier from the string str; member and regenerate the tags file then OmniCppComplete is able to do completion on str.. It doesn't seem to matter whether or not I have let OmniCpp_DefaultNamespaces = ["std", "_GLIBCXX_STD"] set in the .vimrc. The problem is that putting using declarations in header files seems like a big no-no, so I'm back to square one.

    Read the article

  • Why do Facelets ignore the href attribute of a link when I use <a href="url" jsfc="h:outputLink">?

    - by Roman
    I have next facelet composition: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xmlns:ui="http://java.sun.com/jsf/facelets" xmlns:h="http://java.sun.com/jsf/html" xmlns:f="http://java.sun.com/jsf/core"> <body> <ui:composition> <ul id="navigation"> <li> <a href="http://google.com" id="google1" jsfc="h:outputLink">google.com</a> </li> <li> <h:outputLink id="google2" value="http://google.com"> <h:outputText id="outputtext" value="google.com"/> </h:outputLink> </li> </ul> </ui:composition> </body> </html> There must be a mistake because what I expected to see is almost the same final html-markup. But actually here is what facelets generated: <ul id="navigation"> <li><a id="google1" name="google1" href="">google.com</a></li> <li><a id="google2" name="google2" href="http://google.com"><span id="outputtext">google.com</span></a> </li> </ul> Why it ignored href attribute of the first link? What is the correct way to do what I'm trying to do? One more additional question: if I'm using jsfc everywhere I can then what should I do with components from f: namespace? Where should be <f:view> placed? Maybe in the template.xhtml? Or I should simply ignore it?

    Read the article

  • Some frames are not showing in Frame Animation

    - by Aju Vidyadharan
    I am doing a frame to frame animation. My problem is I have given around 10 drawable images in my anim xml. But only first two and last two is showing not all the images. I am doing a translation also on this image.After translation only frame animation starts.Translation is happening and frame animation also happening but it is not showing all the frames. Here is my anim xml. only frog_01 and frog_02 is showing. <animation-list xmlns:android="http://schemas.android.com/apk/res/android" android:oneshot="true" > <item android:drawable="@drawable/frog_01" android:duration="70"/> <item android:drawable="@drawable/frog_02" android:duration="70"/> <item android:drawable="@drawable/frog_03" android:duration="70"/> <item android:drawable="@drawable/frog_04" android:duration="70"/> <item android:drawable="@drawable/frog_05" android:duration="70"/> <item android:drawable="@drawable/frog_04" android:duration="70"/> <item android:drawable="@drawable/frog_03" android:duration="70"/> <item android:drawable="@drawable/frog_02" android:duration="70"/> <item android:drawable="@drawable/frog_01" android:duration="70"/> </animation-list> Here is the code which I am using for the translation and Frame animation... public void frogAnim() { frogView.clearAnimation(); final TranslateAnimation fslide2 = new TranslateAnimation(10, 65, 0, 0); fslide2.setDuration(400); fslide2.setFillAfter(true); fslide2.setAnimationListener(fanimationListener1); frogView.startAnimation(fslide2); c = false; } AnimationListener fanimationListener1 = new AnimationListener() { public void onAnimationEnd(Animation arg0) { c = true; frogView.setBackgroundResource(R.drawable.frog_movement); frogFrameAnimation = (AnimationDrawable) frogView.getBackground(); frogFrameAnimation.start(); playAudioFileListener(R.raw.frog, player); CountDownTimer count = new CountDownTimer(200, 700) { @Override public void onTick(long millisUntilFinished) { } @Override public void onFinish() { frogFrameAnimation.stop(); titileAnimMusic(R.drawable.frog_title, R.anim.alpha_fade_in1, R.raw.vo_child_frog, player); } }; count.start(); } public void onAnimationRepeat(Animation animation) { // TODO Auto-generated method stub } public void onAnimationStart(Animation animation) { } };
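    One thing worth checking before anything else: the CountDownTimer above fires onFinish after only 200 ms, and that is where frogFrameAnimation.stop() is called, yet nine frames at 70 ms each need roughly 630 ms, so most of the list never gets a chance to draw. Here is a sketch of the onAnimationEnd handler with the stop deferred for the full duration instead (the titileAnimMusic call is kept exactly as in the question):

        public void onAnimationEnd(Animation animation) {
            c = true;
            frogView.setBackgroundResource(R.drawable.frog_movement);
            frogFrameAnimation = (AnimationDrawable) frogView.getBackground();
            frogFrameAnimation.start();
            playAudioFileListener(R.raw.frog, player);

            long totalDuration = 9 * 70 + 50; // nine 70 ms frames plus a little slack
            frogView.postDelayed(new Runnable() {
                public void run() {
                    frogFrameAnimation.stop();
                    titileAnimMusic(R.drawable.frog_title, R.anim.alpha_fade_in1,
                                    R.raw.vo_child_frog, player);
                }
            }, totalDuration);
        }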

    Read the article

  • Stumbleupon type query...

    - by Chris Denman
    Wow, makes your head spin! I am about to start a project, and although my MySQL is OK, I can't get my head around what's required for this: I have a table of web addresses. id,url 1,http://www.url1.com 2,http://www.url2.com 3,http://www.url3.com 4,http://www.url4.com I have a table of users. id,name 1,fred bloggs 2,john bloggs 3,amy bloggs I have a table of categories. id,name 1,science 2,tech 3,adult 4,stackoverflow I have a table of categories the user likes, stored as a numerical ref relating to the category's unique ref. For example: user,category 1,4 1,6 1,7 1,10 2,3 2,4 3,5 . . . I have a table of scores relating to each website address. When a user visits one of these sites and says they like it, it's stored like so: url_ref,category 4,2 4,3 4,6 4,2 4,3 5,2 5,3 . . . So based on the above data, URL 4 would score (in its own right) as follows: 2=2 3=2 6=1 What I was hoping to do was pick out a random URL from over 2,000,000 records based on the current user's interests. So if the logged-in user likes a set of categories, I would like to ORDER BY a score generated from those interests. If, say, the logged-in user likes categories 2, 3 and 6, the total score for URL 4 would be 5. However, if the current logged-in user only likes categories 2 and 6, the URL score would be 3. So the ORDER BY would be in the context of the logged-in user's interests. Think of StumbleUpon. I was thinking of using a set of VIEWS to help with subqueries. I'm guessing that all 2,000,000 records will need to be looked at, and for each URL id the query will check what scores it has in each of the current user's selected categories. So we need to know the user ID, and this gets passed into the query as a constant from the start. Ain't got a clue! Chris Denman
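
    A rough sketch of the kind of query this seems to call for (table and column names are guesses based on the description above, and it is untested): join the per-URL like scores against the current user's category list, count the overlap per URL, and order by that count.

        -- hypothetical schema names; ? is the logged-in user's id
        SELECT u.id, u.url, COUNT(*) AS score
        FROM urls u
        JOIN url_scores s ON s.url_ref = u.id
        JOIN user_categories uc ON uc.category = s.category
        WHERE uc.user = ?
        GROUP BY u.id, u.url
        ORDER BY score DESC, RAND()
        LIMIT 1;

    Ordering the whole result set by RAND() is expensive at two million rows, so in practice one might pick a random row from, say, the top few hundred scores instead; the sketch only shows the scoring idea.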

    Read the article

  • Method interception in PHP 5.*

    - by Rolf
    Hi everybody, I'm implementing a logging system for PHP, and I'm a bit stuck. All the configuration is defined in an XML file that declares every method to be logged. The XML is parsed correctly and converted into a multidimensional array (classname = array of methods). So far, so good. Let's take a simple example: #A.php class A { public function foo($bar) { echo ' // Hello there !'; } public function bar($foo) { echo " $ù$ùmezf$z !"; } } #B.php class B { public function far($boo) { echo $boo; } } Now, let's say I have this configuration file: <interceptor> <methods class="__CLASS_DIR__A.php"> <method name="foo"> <log-level>INFO</log-level> <log-message>Transaction init</log-message> </method> </methods> <methods class="__CLASS_DIR__B.php"> <method name="far"> <log-level>DEBUG</log-level> <log-message>Useless</log-message> </method> </methods> </interceptor> The thing I'd like AT RUNTIME ONLY (once the XML parser has done its job) is: #Logger.php (it's definitely NOT a final version) -- generated by the XML parser class Logger { public function __call($name,$args) { $log_level = $args[0]; $args = array_slice($args,1); switch($name) { case 'foo': case 'far': //case ..... //write in log files break; } //THEN, RELAY THE CALL TO THE INITIAL METHOD } } #"dynamic" A.php class A extends Logger { public function foo($log_level, $bar) { echo ' // Hello there !'; } public function bar($foo) { echo " $ù$ùmezf$z !"; } } #"dynamic" B.php class B extends Logger { public function far($log_level, $boo) { echo $boo; } } The big challenge here is to transform A and B into their "dynamic" versions once the XML parser has completed its job. The ideal would be to achieve that without modifying the code of A and B at all (I mean, in the files) - or at least find a way to come back to their original versions once the program is finished. To be clear, I want to find the cleanest way to intercept method calls in PHP. What are your ideas about it? Thanks in advance, Rolf
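
    For what it's worth, one common way to get this interception without rewriting A.php and B.php on disk is a generic proxy that wraps the real instance and logs inside __call before relaying the call. A minimal sketch, with the class and configuration shape invented for illustration (they are not from the original project):

        <?php
        // hypothetical logging proxy: wraps any object and relays calls to it
        class LoggingProxy
        {
            private $target;   // the real A or B instance
            private $config;   // e.g. array('foo' => array('INFO', 'Transaction init'))

            public function __construct($target, array $config)
            {
                $this->target = $target;
                $this->config = $config;
            }

            public function __call($name, $args)
            {
                if (isset($this->config[$name])) {
                    list($level, $message) = $this->config[$name];
                    error_log("[$level] $message");   // or write to a log file
                }
                // relay the call to the original method, untouched
                return call_user_func_array(array($this->target, $name), $args);
            }
        }

        $a = new LoggingProxy(new A(), array('foo' => array('INFO', 'Transaction init')));
        $a->foo('bar');   // logs "[INFO] Transaction init", then runs A::foo('bar')

    The trade-off is that callers have to hold the proxy rather than the bare A or B, but the original class files stay untouched and dropping the wrapper restores the original behaviour.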

    Read the article

  • TicTacToe AI Making Incorrect Decisions

    - by Chris Douglass
    A little background: as a way to learn multinode trees in C++, I decided to generate all possible TicTacToe boards and store them in a tree, such that the branch beginning at a node contains all boards that can follow from that node, and the children of a node are boards that follow in one move. After that, I thought it would be fun to write an AI to play TicTacToe using that tree as a decision tree. TTT is a solvable problem where a perfect player will never lose, so it seemed an easy AI to code for my first time trying an AI. When I first implemented the AI, I went back and added two fields to each node upon generation: the # of times X will win & the # of times O will win in all children below that node. I figured the best solution was to simply have my AI, on each move, choose and go down the subtree where it wins the most times. Then I discovered that while it played perfectly most of the time, I found ways I could beat it. It wasn't a problem with my code, simply a problem with the way I had the AI choose its path. Then I decided to have it choose the tree with either the maximum wins for the computer or the maximum losses for the human, whichever was more. This made it perform BETTER, but still not perfect. I could still beat it. So I have two ideas and I'm hoping for input on which is better: 1) Instead of maximizing the wins or losses, I could assign values of 1 for a win, 0 for a draw, and -1 for a loss. Then choosing the tree with the highest value will be the best move, because that next node can't be a move that results in a loss. It's an easy change in the board generation, but it retains the same search space and memory usage. Or... 2) During board generation, if there is a board such that either X or O will win in their next move, only the child that prevents that win will be generated. No other child nodes will be considered, and then generation will proceed as normal after that. It shrinks the size of the tree, but then I have to implement an algorithm to determine if there is a one-move win, and I think that can only be done in linear time (making board generation a lot slower, I think?). Which is better, or is there an even better solution?
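
    A note on option 1, since it is essentially the standard minimax idea: the 1/0/-1 values only give a perfect player if each node takes the best child value from the perspective of the player to move, rather than summing or counting outcomes over the whole subtree. A rough C++ sketch over a generic node type (the struct and field names are placeholders, not from the original program; it uses a C++11 range-for):

        #include <algorithm>
        #include <climits>
        #include <vector>

        // hypothetical node: children plus a score for finished boards
        // (+1 X wins, 0 draw, -1 O wins)
        struct Node {
            std::vector<Node*> children;
            int terminalScore;
        };

        // value of this position assuming both sides play perfectly;
        // maximizing == true when it is X's turn to move here
        int minimax(const Node* n, bool maximizing) {
            if (n->children.empty())
                return n->terminalScore;
            int best = maximizing ? INT_MIN : INT_MAX;
            for (const Node* child : n->children) {
                int v = minimax(child, !maximizing);
                best = maximizing ? std::max(best, v) : std::min(best, v);
            }
            return best;
        }

    The AI would then pick the child whose minimax value is best for the side it is playing, either computed on the fly as above or cached in each node during generation, which keeps the search space and memory roughly the same as the current tree.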

    Read the article

  • Exception in thread "main" java.lang.StackOverflowError

    - by Ray.R.Chua
    I have a piece of code and I could not figure out why it is giving me Exception in thread "main" java.lang.StackOverflowError. This is the question: Given a positive integer n, print out the sum of the lengths of the Syracuse sequences starting in the range of 1 to n inclusive. So, for example, the call lengths(3) will return the combined length of the sequences: 1 2 1 3 10 5 16 8 4 2 1 which is the value 11. lengths must throw an IllegalArgumentException if its input value is less than one. My Code: import java.util.HashMap; public class Test { HashMap<Integer,Integer> syraSumHashTable = new HashMap<Integer,Integer>(); public Test(){ } public int lengths(int n)throws IllegalArgumentException{ int sum =0; if(n < 1){ throw new IllegalArgumentException("Error!! Invalid Input!"); } else{ for(int i =1; i<=n;i++){ if(syraSumHashTable.get(i)==null) { syraSumHashTable.put(i, printSyra(i,1)); sum += (Integer)syraSumHashTable.get(i); } else{ sum += (Integer)syraSumHashTable.get(i); } } return sum; } } private int printSyra(int num, int count){ int n = num; if(n == 1){ return count; } else{ if(n%2==0){ return printSyra(n/2, ++count); } else{ return printSyra((n*3)+1, ++count) ; } } } } Driver code: public static void main(String[] args) { // TODO Auto-generated method stub Test s1 = new Test(); System.out.println(s1.lengths(90090249)); //System.out.println(s1.lengths(5)); } I know the problem lies with the recursion. The error does not occur if the input is a small value, for example 5. But when the number is huge, like 90090249, I get the Exception in thread "main" java.lang.StackOverflowError. Thanks all for your help. :) I almost forgot the error message: Exception in thread "main" java.lang.StackOverflowError at Test.printSyra(Test.java:60) at Test.printSyra(Test.java:65) at Test.printSyra(Test.java:60) at Test.printSyra(Test.java:65) at Test.printSyra(Test.java:60) at Test.printSyra(Test.java:60) at Test.printSyra(Test.java:60) at Test.printSyra(Test.java:60)
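
    Two observations that may explain the crash (worth verifying): for seeds this large, intermediate values of 3n+1 can exceed Integer.MAX_VALUE, and once the int wraps around to a negative number the sequence can wander far longer than expected; and printSyra adds one stack frame per step, so the stack eventually blows. A sketch that avoids both by iterating over a long instead of recursing (a possible drop-in replacement for printSyra inside the same class, untested against the full assignment):

        // hypothetical replacement for printSyra: iterative and 64-bit,
        // so 3*n + 1 cannot overflow for any int seed
        private int sequenceLength(long n) {
            int count = 1;
            while (n != 1) {
                n = (n % 2 == 0) ? n / 2 : 3 * n + 1;
                count++;
            }
            return count;
        }

    lengths(int n) would then call sequenceLength(i) instead of printSyra(i, 1); the memoisation map can stay as it is, and for inputs around 90 million the running sum in lengths may also need to become a long.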

    Read the article
