Search Results

Search found 1127 results on 46 pages for 'translation'.

  • DNS and DHCP dies after ~2 days of use on ClearOS

    - by TheLQ
    I'm using ClearOS (based on CentOS, so any info specific to it should apply here) as a gateway, DHCP, and DNS server. I had this server running perfectly for a month or two before replacing it with another server. However, due to DNS and DHCP failing 2 days in and a host of other performance issues (the box was a little underpowered), I changed back to the original server. However, 2 days in, DHCP and DNS are failing again, and I'm out of ideas on why. In both cases, to my knowledge no network or server changes occurred after installation. Right after installing (and at least a day in) DNS and DHCP were working just fine. However, later (Day 2) I get a call saying their internet is down (translation: nobody can get to websites because DNS is down). I've tried to fix the problem by checking if dnsmasq is even running (it is), restarting the service, and restarting the server, to no effect. I do have two internal servers that have static DHCP leases, but one's lease must have expired as I can't connect to it anymore. I'm hesitant to do any DHCP testing on the last server as I'll not be able to connect to it anymore. Is there anything anyone can think of on why DNS and DHCP would fail 2 days into running perfectly? More info: Running dnsmasq in debug mode. This is all that's displayed, even when running nslookup quackwall. I'm not sure though if nslookup commands should show up in the log: [root@quackwall ~]# /usr/sbin/dnsmasq -dq dnsmasq: started, version 2.49 cachesize 150 dnsmasq: compile time options: IPv6 GNU-getopt no-DBus no-I18N DHCP TFTP dnsmasq-dhcp: DHCP, IP range 10.0.0.100 -- 10.0.0.254, lease time 12h dnsmasq: reading /etc/resolv.conf dnsmasq: using nameserver 74.128.17.114#53 dnsmasq: using nameserver 74.128.19.102#53 dnsmasq: read /etc/hosts - 5 addresses dnsmasq-dhcp: read /etc/ethers - 2 addresses On the other server, DNS and the gateway are all configured correctly (10.0.0.2 is quackwall): lordquackstar@quackgame:~$ netstat -rn Kernel IP routing table Destination Gateway Genmask Flags MSS Window irtt Iface 10.0.0.0 0.0.0.0 255.255.240.0 U 0 0 0 eth0 0.0.0.0 10.0.0.2 0.0.0.0 UG 0 0 0 eth0 lordquackstar@quackgame:~$ cat /etc/resolv.conf nameserver 10.0.0.2 domain highwow.lan search highwow.lan
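
    A quick way to narrow this down is to query the dnsmasq box and an upstream resolver directly from a client and compare the results. A minimal sketch, assuming the third-party dnspython package is installed, using the addresses from the post (10.0.0.2 for quackwall, 74.128.17.114 for the upstream nameserver):

        import dns.resolver
        import dns.exception

        def try_lookup(server, name):
            # Build a resolver that ignores /etc/resolv.conf and asks one server only
            r = dns.resolver.Resolver(configure=False)
            r.nameservers = [server]
            r.lifetime = 3  # seconds before giving up
            try:
                answer = r.resolve(name, "A")
                return [rr.address for rr in answer]
            except dns.exception.Timeout:
                return "timeout (server not answering on port 53?)"
            except dns.resolver.NXDOMAIN:
                return "NXDOMAIN (name unknown to this server)"

        for server in ("10.0.0.2", "74.128.17.114"):
            print(server, try_lookup(server, "www.example.com"))
        print("10.0.0.2", try_lookup("10.0.0.2", "quackwall.highwow.lan"))

    If the upstream answers but 10.0.0.2 times out, the problem is dnsmasq or a firewall/NAT rule on the gateway; if both answer, look at the clients' leases and resolv.conf instead.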

    Read the article

  • Custom built machine has much higher power consumption than expected

    - by foraidt
    I built a machine according to the specs of a computer magazine (c't, Germany). According to the magazine, the power consumption should be at around 10W. I don't want to go into the specifics of the hardware but rather ask for general advice on where to look: I updated the BIOS/UEFI to the latest version, installed all the recommended drivers and unplugged all hardware that's not necessary to boot into Windows. All that was left was the power supply, mainboard, CPU, CPU cooler and one SSD drive. But still I measured a power consumption of 50W, which is 40W more than it should be. I tried booting Linux Mint from a USB stick, so I don't think it's some Windows-related problem. Where else could I look? Update 1: I didn't want the question to get closed for being too localized, but if more details are necessary, here they are: The system is a desktop PC. The power consumption is measured using a Brennenstuhl PM 231 device, which was also tested by c't and found to be quite accurate. The PSU is an Enermax ETL300AWT, the mainboard Intel DH87RL (Socket 1150) and the CPU Intel G3220 (Haswell). Update 2: There is no online version of the article*. The most details I found can be read on its project page (in German, though...). (*)You can pay for downloadable PDFs, however. English translation of that project page Update 3: Regarding the sceptics: It may sound ridiculous, but apparently 10W idle consumption is possible with Intel's Haswell architecture. As a kind of proof, there's an additional blog article explicitly listing the steps needed to reduce the idle consumption to 10W. Additional hardware: I measured the consumption without the HDD, and as expected the usage dropped by around 10W. I have no chassis fans and the CPU fan is a "Scythe Mugen 4" model. It runs at around 600rpm so I think it won't draw much. When stripping off all my extra components I should be at 10W. But I'm not getting anywhere near that. I would be happy to see "just" 15W in the stripped-down version, but currently I'm not getting below 50W no matter which component I remove. As I see it this cannot be explained by the PSU being less efficient at lower consumption. I also waited half an hour or so (and checked that no Windows updates were running in the background) and the consumption didn't drop by more than a few watts.

    Read the article

  • Pinning based on origin of a reprepro repository.

    - by Shtééf
    I'm on Ubuntu 10.04, and trying to set up a repository using reprepro. I'd also like the pin everything in that repository to be preferred over anything else, even if packages are older versions. (It will only contain a select set of packages.) However, I cannot seem to get the pinning to work, and believe it has something to do with the repository side of things, rather than the apt configuration on the client. I've taken the following steps to set up my repository Installed a web server (my personal choice here is Cherokee), Created the directory /var/www/apt/, Created the file conf/distributions, like so: Origin: Shteef Label: Shteef Suite: lucid Version: 10.04 Codename: lucid Architectures: i386 amd64 source Components: main Description: My personal repository Ran reprepro export from the /var/www/apt/ directory. Now on any other machine, I can add this (empty) repository over HTTP to my /etc/apt/sources.list, and run apt-get update without any errors: Ign http://archive.lan lucid Release.gpg Ign http://archive.lan/apt/ lucid/main Translation-en_US Get:1 http://archive.lan lucid Release [2,244B] Ign http://archive.lan lucid/main Packages Ign http://archive.lan lucid/main Sources Ign http://archive.lan lucid/main Packages Ign http://archive.lan lucid/main Sources Hit http://archive.lan lucid/main Packages Hit http://archive.lan lucid/main Sources In my case, now I want to use an old version of Asterisk, namely Asterisk 1.4. I rebuilt the asterisk-1:1.4.21.2~dfsg-3ubuntu2.1 package from Ubuntu 9.04 (with some small changes to fix dependencies) and uploaded it to my repository. At this point I can see the new package in aptitude, but it naturally prefers the newer Asterisk 1.6 currently in the Ubuntu 10.04 repositories. To try and fix that, I have created /etc/apt/preferences.d/personal like so: Package: * Pin: release o=Shteef Pin-Priority: 1000 But when I try to install the asterisk package, it will still prefer the 1.6 version over my own 1.4 version. This is what apt-cache policy asterisk shows: asterisk: Installed: (none) Candidate: 1:1.6.2.5-0ubuntu1 Version table: 1:1.6.2.5-0ubuntu1 0 500 http://nl.archive.ubuntu.com/ubuntu/ lucid/universe Packages 1:1.4.21.2~dfsg-3ubuntu2.1shteef1 0 500 http://archive.lan/apt/ lucid/main Packages Clearly, it is not picking up my pin. In fact, when I run just apt-cache policy, I get the following: Package files: 100 /var/lib/dpkg/status release a=now 500 http://archive.lan/apt/ lucid/main Packages origin archive.lan 500 http://security.ubuntu.com/ubuntu/ lucid-security/multiverse Packages release v=10.04,o=Ubuntu,a=lucid-security,n=lucid,l=Ubuntu,c=multiverse origin security.ubuntu.com [...] Unlike Ubuntu's repository, apt doesn't seem to pick up a release-line at all for my own repository. I'm suspecting this is the cause why I can't pin on release o=Shteef in my preferences file. But I can't find any noticable difference between my repository's Release files and Ubuntu's that would cause this. Is there a step I've missed or mistake I've made in setting up my repository?
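
    One way to check what apt is actually seeing is to pull the repository's Release file and look for the Origin/Label/Suite fields that the pin refers to. A minimal sketch using only the Python standard library (archive.lan is the hostname from the post; adjust the path if your reprepro layout differs):

        from urllib.request import urlopen

        url = "http://archive.lan/apt/dists/lucid/Release"
        with urlopen(url) as resp:
            release = resp.read().decode("utf-8", errors="replace")

        # The pin "release o=Shteef" can only match if apt sees an Origin field here
        for line in release.splitlines():
            if line.split(":")[0] in ("Origin", "Label", "Suite", "Codename", "Components"):
                print(line)

    If Origin is missing, or the file lives at a different path than the one apt requests, the o=Shteef pin has nothing to match against, which would explain the missing release line in apt-cache policy.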

    Read the article

  • Mercurial browser on Windows 2003 takes several refreshes before displaying repositories

    - by Tim Murphy
    When I attempt to browse my Mercurial repositories, it usually takes several refreshes before the repository list is displayed. The configuration is as follows: Windows Server 2003 (dedicated machine hosted by http://www.server4you.com/). Site has anonymous password protection with self-signed SSL. Mercurial 1.5.3 Python 2.6.5 Python for Windows 32 extensions 214 py2.6 isapi-wsgi 0.4.2 The repositories are being served via ISAPI using the standard hgwebdir_wsgi.py file (copy to follow). Other problems with the repository server: Before doing a clone/push/etc I have to browse the repositories first, otherwise hg on my local machine cannot locate the site. I have one repository with a large changeset that after a minute or so throws the error "abort: error: An existing connection was forcibly closed by the remote host". Will be asking another question for this problem. What can I do to start tracking down this problem? hgwebdir_wsgi.py # Configuration file location hgweb_config = r'C:\Public\Mercurial\WebSite\hgweb.config' # Global settings for IIS path translation path_strip = 0 # Strip this many path elements off (when using url rewrite) path_prefix = 0 # This many path elements are prefixes (depends on the # virtual path of the IIS application). import sys # Adjust python path if this is not a system-wide install #sys.path.insert(0, r'c:\path\to\python\lib') # Enable tracing. Run 'python -m win32traceutil' to debug if hasattr(sys, 'isapidllhandle'): import win32traceutil # To serve pages in local charset instead of UTF-8, remove the two lines below import os os.environ['HGENCODING'] = 'UTF-8' import isapi_wsgi from mercurial import demandimport; demandimport.enable() from mercurial.hgweb.hgwebdir_mod import hgwebdir # Example tweak: Replace isapi_wsgi's handler to provide better error message # Other stuff could also be done here, like logging errors etc. class WsgiHandler(isapi_wsgi.IsapiWsgiHandler): error_status = '500 Internal Server Error' # less silly error message isapi_wsgi.IsapiWsgiHandler = WsgiHandler # Only create the hgwebdir instance once application = hgwebdir(hgweb_config) def handler(environ, start_response): # Translate IIS's weird URLs url = environ['SCRIPT_NAME'] + environ['PATH_INFO'] paths = url[1:].split('/')[path_strip:] script_name = '/' + '/'.join(paths[:path_prefix]) path_info = '/'.join(paths[path_prefix:]) if path_info: path_info = '/' + path_info environ['SCRIPT_NAME'] = script_name environ['PATH_INFO'] = path_info return application(environ, start_response) def __ExtensionFactory__(): return isapi_wsgi.ISAPISimpleHandler(handler) if __name__=='__main__': from isapi.install import * params = ISAPIParameters() HandleCommandLine(params) hgweb.config [paths] / = C:\Public\Mercurial\Repositories\* [web] allow_archive = bz2 gz zip ; Allows archive downloads. allow_push = ######## ; Users that are allowed to push.
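
    One way to separate hgwebdir problems from ISAPI/IIS problems is to run the exact same WSGI application under Python's built-in server and see whether the repository list renders reliably there. A minimal sketch, assuming it is run on the server next to the same hgweb.config (paths taken from the post):

        from wsgiref.simple_server import make_server
        from mercurial import demandimport; demandimport.enable()
        from mercurial.hgweb.hgwebdir_mod import hgwebdir

        # Same config the ISAPI handler uses
        application = hgwebdir(r'C:\Public\Mercurial\WebSite\hgweb.config')

        # Serve on a spare port; browse http://localhost:8080/ repeatedly
        httpd = make_server('', 8080, application)
        print("Serving hgwebdir on port 8080...")
        httpd.serve_forever()

    If the list always appears here but is flaky through IIS, the problem is in the isapi-wsgi layer or IIS worker-process recycling rather than in Mercurial itself.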

    Read the article

  • .NET vs Windows 8

    - by Simon Cooper
    So, day 1 of DevWeek. Lots and lots of Windows 8 and WinRT, as you would expect. The keynote had some actual content in it, fleshed out some of the details of how your apps linked into the Metro infrastructure, and confirmed that there would indeed be an enterprise version of the app store available for Metro apps. However, that's not what I want to focus this post on. What I do want to focus on is this: Windows 8 does not make .NET developers obsolete. Phew! .NET in the New Ecosystem In all the hype around Windows 8 the past few months, a lot of developers have got the impression that .NET has been sidelined in Windows 8; C++ and COM are back in vogue, and HTML5 + JavaScript is the New Way of writing applications. You know .NET? It's yesterday's tech. Enter the 21st Century and write <div>! However, after speaking to people at the conference, and after a couple of talks by Dave Wheeler on the innards of WinRT and how .NET interacts with it, my views on the coming operating system have changed somewhat. To summarize what I've picked up, in no particular order (none of this is official, just my sense of what's been said by various people): Metro apps do not replace desktop apps. That is, Windows 8 fully supports .NET desktop applications written for every other previous version of Windows, and will continue to do so in the foreseeable future. There are some apps that simply do not fit into Metro. They do not fit into the touch-based paradigm, and never will. Traditional desktop support is not going away anytime soon. The reason Silverlight has been hidden in all the Metro hype is that Metro is essentially based on Silverlight design principles. Silverlight developers will have a much easier time writing Metro apps than desktop developers, as they would already be used to all the principles of sandboxing and separation introduced with Silverlight. It's desktop developers who are going to have to adapt how they work. .NET + XAML is equal to HTML5 + JS in importance. Although the underlying WinRT system is built on C++ & COM, most application development will be done either using .NET or HTML5. Both systems have their own wrapper around the underlying WinRT infrastructure, hiding the implementation details. The CLR is unchanged; it's still the .NET 4 CLR, running IL in .NET assemblies. The thing that changes between desktop and Metro is the class libraries, which have more in common with the Silverlight libraries than the desktop libraries. In Metro, although all the types look and behave the same to callers, some of the core BCL types are now wrappers around their WinRT equivalents. These wrappers are then enhanced using standard .NET types and code to produce the Metro .NET class libraries. You can't simply port a desktop app into Metro. The underlying file IO, network, timing and database access is either completely different or simply missing. Similarly, although the UI is programmed using XAML, the behaviour of the Metro XAML is different to WPF or Silverlight XAML. Furthermore, the new design principles and touch-based interface for Metro applications demand a completely new UI. You will be able to re-use sections of your app encapsulating pure program logic, but everything else will need to be written from scratch. 
Microsoft has taken the opportunity to remove a whole raft of types and methods from the Metro framework that are obsolete (non-generic collections) or break the sandbox (synchronous APIs); if you use these, you will have to rewrite to use the alternatives, if they exist at all, to move your apps to Metro. If you want to write public WinRT components in .NET, there are some quite strict rules you have to adhere to. But the compilers know about these rules; you can write them in C# or VB, and the compilers will tell you when you do something that isn't allowed and deal with the translation to WinRT metadata rather than .NET assemblies. It is possible to write a class library that can be used in Metro and desktop applications. However, you need to be very careful not to use types that are available in one but not the other. One can imagine developers writing their own abstraction around file IO and UIs (MVVM anyone?) that can be implemented differently in Metro and desktop, but look the same within your shared library. So, if you're a .NET developer, you have a lot less to worry about. .NET is a viable platform on Metro, and traditional desktop apps are not going away. You don't have to learn HTML5 and JavaScript if you don't want to. Hurray!

    Read the article

  • BPM Suite 11gR1 Released

    - by Manoj Das
    This morning (April 27th, 2010), Oracle BPM Suite 11gR1 became available for download from OTN and eDelivery. If you have been following our plans in this area, you know that this is the release unifying BEA ALBPM product, which became Oracle BPM10gR3, with the Oracle stack. Some of the highlights of this release are: BPMN 2.0 modeling and simulation Web based Process Composer for BPMN and Rules authoring Zero-code environment with full access to Oracle SOA Suite’s rich set of application and other adapters Process Spaces – Out-of-box integration with Web Center Suite Process Analytics – Native process cubes as well as integration with Oracle BAM You can learn more about this release from the documentation. Notes about downloading and installing Please note that Oracle BPM Suite 11gR1 is delivered and installed as part of SOA 11.1.1.3.0, which is a sparse release (only incremental patch). To install: Download and install SOA 11.1.1.2.0, which is a full release (you can find the bits at the above location) Download and install SOA 11.1.1.3.0 During configure step (using the Fusion Middleware configuration wizard), use the Oracle Business Process Management template supplied with the SOA Suite11g (11.1.1.3.0) If you plan to use Process Spaces, also install Web Center 11.1.1.3.0, which also is delivered as a sparse release and needs to be installed on top of Web Center 11.1.1.2.0 Some early feedback We have been receiving very encouraging feedback on this release. Some quotes from partners are included below: “I just attended a preview workshop on BPM Studio, Oracle's BPMN 2.0 tool, held by Clemens Utschig Utschig from Oracle HQ. The usability and ease to get started are impressive. In the business view analysts can intuitively start modeling, then developers refine in their own, more technical view. The BPM Studio sets itself apart from pure play BPMN 2.0 tools by being seamlessly integrated inside a holistic SOA / BPM toolset: BPMN models are placed in SCA-Composites in SOA Suite 11g. This allows to abstract away the complexities of SOA integration aspects from business process aspects. For UIs in BPMN tasks, you have the richness of ADF 11g based Frontends. With BPM Studio we architects have a new modeling and development IDE that gives us interesting design challenges to grasp and elaborate, since many things BPMN 2.0 are different from good ol' BPEL. For example, for simple transformations, you don't use BPEL "assign" any more, but add the transformation directly to the service call. There is much less XPath involved. And, there is no translation from model to BPEL code anymore, so the awkward process model to BPEL roundtrip, which never really worked as well as it looked on marketing slides, is obsolete: With BPMN 2.0 "the model is the code". Now, these are great times to start the journey into BPM! Some tips: Start Projects smoothly, with initial processes being not overly complex and not using the more esoteric areas of BPMN, to manage the learning path and to stay successful with each iteration. Verify non functional requirements by conducting performance and load tests early. As mentioned above, separate all technical integration logic into SOA Suite or Oracle Service Bus. And - share your experience!” Hajo Normann, SOA Architect - Oracle ACE Director - Co-Leader DOAG SIG SOA   "Reuse of components across the Oracle 11G Fusion Middleware stack, like for instance a Database Adapter, is essential. It improves stability and predictability of the solution. 
BPM just is one of the components plugging into the stack and reuses all other components." Mr. Leon Smiers, Oracle Solution Architect, Capgemini   “I had the opportunity to follow a hands-on workshop held by Clemens for Oracle partners and I was really impressed of the overall offering of BPM11g. BPM11g allows the execution of BPMN 2.0 processes, without having to transform/translate them first to BPEL in order to be executable. The fact that BPMN uses the same underlying service infrastructure of SOA Suite 11g has a lot of benefits for us already familiar with SOA Suite 11g. BPMN is just another SCA component within a SCA composite and can (re)use all the existing components like Rules, Human Workflow, Adapters and Mediator. I also like the fact that BPMN runs on the same service engine as BPEL. By that all known best practices for making a BPEL  process reliable are valid for BPMN processes as well. Last but not least, BPMN is integrated into the superior end-to-end tracing of SOA Suite 11g. With BPM11g, Oracle offers a very competitive product which will have a big effect on the IT market. Clemens and Jürgen: Thanks for the great workshop! I’m really looking forward to my first project using Oracle BPM11g!” Guido Schmutz, Technology Manager / Oracle ACE Director for Fusion Middleware and SOA, Company:  Trivadis Some earlier feedback were summarized in this post.

    Read the article

  • JDK bug migration: components and subcomponents

    - by darcy
    One subtask of the JDK migration from the legacy bug tracking system to JIRA was reclassifying bugs from a three-level taxonomy in the legacy system, (product, category, subcategory), to a fundamentally two-level scheme in our customized JIRA instance, (component, subcomponent). In the JDK JIRA system, there is technically a third project-level classification, but by design a large majority of JDK-related bugs were migrated into a single "JDK" project. In the end, over 450 legacy subcategories were simplified into about 120 subcomponents in JIRA. The 120 subcomponents are distributed among 17 components. A rule of thumb used was that a subcategory had to have at least 50 bugs in it for it to be retained. Below is a listing the component / subcomponent classification of the JDK JIRA project along with some notes and guidance on which OpenJDK email addresses cover different areas. Eventually, a separate incidents project to host new issues filed at bugs.sun.com will use a slightly simplified version of this scheme. The preponderance of bugs and subcomponents for the JDK are in library-related areas, with components named foo-libs and subcomponents primarily named after packages. While there was an overall condensation of subcomponents in the migration, in some cases long-standing informal divisions in core libraries based on naming conventions in the description were promoted to formal subcomponents. For example, hundreds of bugs in the java.util subcomponent whose descriptions started with "(coll)" were moved into java.util:collections. Likewise, java.lang bugs starting with "(reflect)" and "(proxy)" were moved into java.lang:reflect. client-libs (Predominantly discussed on 2d-dev and awt-dev and swing-dev.) 2d demo java.awt java.awt:i18n java.beans (See beans-dev.) javax.accessibility javax.imageio javax.sound (See sound-dev.) javax.swing core-libs (See core-libs-dev.) java.io java.io:serialization java.lang java.lang.invoke java.lang:class_loading java.lang:reflect java.math java.net java.nio (Discussed on nio-dev.) java.nio.charsets java.rmi java.sql java.sql:bridge java.text java.util java.util.concurrent java.util.jar java.util.logging java.util.regex java.util:collections java.util:i18n javax.annotation.processing javax.lang.model javax.naming (JNDI) javax.script javax.script:javascript javax.sql org.openjdk.jigsaw (See jigsaw-dev.) security-libs (See security-dev.) java.security javax.crypto (JCE: includes SunJCE/MSCAPI/UCRYPTO/ECC) javax.crypto:pkcs11 (JCE: PKCS11 only) javax.net.ssl (JSSE, includes javax.security.cert) javax.security javax.smartcardio javax.xml.crypto org.ietf.jgss org.ietf.jgss:krb5 other-libs corba corba:idl corba:orb corba:rmi-iiop javadb other (When no other subcomponent is more appropriate; use judiciously.) Most of the subcomponents in the xml component are related to jaxp. xml jax-ws jaxb javax.xml.parsers (JAXP) javax.xml.stream (JAXP) javax.xml.transform (JAXP) javax.xml.validation (JAXP) javax.xml.xpath (JAXP) jaxp (JAXP) org.w3c.dom (JAXP) org.xml.sax (JAXP) For OpenJDK, most JVM-related bugs are connected to the HotSpot Java virtual machine. hotspot (See hotspot-dev.) build compiler (See hotspot-compiler-dev.) gc (garbage collection, see hotspot-gc-dev.) jfr (Java Flight Recorder) jni (Java Native Interface) jvmti (JVM Tool Interface) mvm (Multi-Tasking Virtual Machine) runtime (See hotspot-runtime-dev.) svc (Servicability) test core-svc (See serviceability-dev.) 
debugger java.lang.instrument java.lang.management javax.management tools The full JDK bug database contains entries related to legacy virtual machines that predate HotSpot as well as retired APIs. vm-legacy jit (Sun Exact VM) jit_symantec (Symantec VM, before Exact VM) jvmdi (JVM Debug Interface ) jvmpi (JVM Profiler Interface ) runtime (Exact VM Runtime) Notable command line tools in the $JDK/bin directory have corresponding subcomponents. tools appletviewer apt (See compiler-dev.) hprof jar javac (See compiler-dev.) javadoc(tool) (See compiler-dev.) javah (See compiler-dev.) javap (See compiler-dev.) jconsole launcher updaters (Timezone updaters, etc.) visualvm Some aspects of JDK infrastructure directly affect JDK Hg repositories, but other do not. infrastructure build (See build-dev and build-infra-dev.) licensing (Covers updates to the third party readme, licenses, and similar files.) release_eng (Release engineering) staging (Staging of web pages related to JDK releases.) The specification subcomponent encompasses the formal language and virtual machine specifications. specification language (The Java Language Specification) vm (The Java Virtual Machine Specification) The code for the deploy and install areas is not currently included in OpenJDK. deploy deployment_toolkit plugin webstart install auto_update install servicetags In the JDK, there are a number of cross-cutting concerns whose organization is essentially orthogonal to other areas. Since these areas generally have dedicated teams working on them, it is easier to find bugs of interest if these bugs are grouped first by their cross-cutting component rather than by the affected technology. docs doclet guides hotspot release_notes tools tutorial embedded build hotspot libraries globalization locale-data translation performance hotspot libraries The list of subcomponents will no doubt grow over time, but my inclination is to resist that growth since the addition of each subcomponent makes the system as a whole more complicated and harder to use. When the system gets closer to being externalized, I plan to post more blog entries describing recommended use of various custom fields in the JDK project.
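
    The description-prefix promotion described above (for example, java.util bugs starting with "(coll)" becoming java.util:collections) is easy to picture as a small mapping pass. A purely illustrative sketch in Python, with made-up bug records rather than the real bug database:

        # Hypothetical legacy records: (bug_id, subcategory, description)
        legacy_bugs = [
            (100, "java.util", "(coll) HashMap resize race"),
            (101, "java.util", "Calendar off-by-one in era handling"),
            (102, "java.lang", "(reflect) getMethods ordering"),
        ]

        # Prefix rules promoted to formal subcomponents during the migration
        prefix_rules = {
            ("java.util", "(coll)"): "java.util:collections",
            ("java.lang", "(reflect)"): "java.lang:reflect",
            ("java.lang", "(proxy)"): "java.lang:reflect",
        }

        def new_subcomponent(subcat, description):
            for (cat, prefix), target in prefix_rules.items():
                if subcat == cat and description.startswith(prefix):
                    return target
            return subcat  # unchanged if no rule applies

        for bug_id, subcat, desc in legacy_bugs:
            print(bug_id, "->", new_subcomponent(subcat, desc))

    This is only a sketch of the idea; the actual migration tooling and thresholds (such as the 50-bug rule of thumb) are not shown here.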

    Read the article

  • Short Season, Long Models - Dealing with Seasonality

    - by Michel Adar
    Accounting for seasonality presents a challenge for the accurate prediction of events. Examples of seasonality include: boxed cosmetics sets are more popular during Christmas (they sell at other times of the year, but they rise higher than other products during the holiday season); interest in a promotion rises around the time advertising on TV airs; interest in the Sports section of a newspaper rises when there is a big football match. There are several ways of dealing with seasonality in predictions. Time Windows: If the length of the model time windows is short enough relative to the seasonality effect, then the models will see only seasonal data, and therefore will be accurate in their predictions. For example, a model with a weekly time window may be quick enough to adapt during the holiday season. In order for time windows to be useful in dealing with seasonality it is necessary that the time window is significantly shorter than the season changes, and that there is enough volume of data in the short time windows to produce an accurate model. An additional issue to consider is that sometimes the season may have an abrupt end, for example the day after Christmas. Input Data: If available, it is possible to include the seasonality effect in the input data for the model. For example, the customer record may include a list of all the promotions advertised in the area of residence. A model with these inputs will have to learn the effect of the input. It is possible to learn it specific to the promotion – and by the way learn about inter-promotion cross feeding – by leaving the list of ads as it is; or it is possible to learn the general effect by having a flag that indicates if the promotion is being advertised. For inputs to properly represent the effect in the model it is necessary that the model sees enough events with the input present, for example by virtue of the model lifetime (or time window) being long enough to see several “seasons”, or by having enough volume for the model to learn seasonality quickly. Proportional Frequency: If we create a model that ignores seasonality, it is possible to use that model to predict how the specific person's likelihood differs from average. If we have a divergence from average, then we can transfer that divergence proportionally to the observed frequency at the time of the prediction. Definitions: Ft = trailing average frequency of the event at time “t”. The average is done over a suitable period of time to achieve a statistically significant estimate. F = average frequency as seen by the model. L = likelihood predicted by the model for a specific person. Lt = predicted likelihood proportionally scaled for time “t”. If the model is good at predicting deviation from average, and this holds over the interesting range of seasons, then we can estimate Lt as: Lt = L * (Ft / F) Considering that: L = (L – F) + F Substituting we get: Lt = [(L – F) + F] * (Ft / F) Which simplifies to: (i) Lt = (L – F) * (Ft / F) + Ft This latest expression can be understood as “the adjusted likelihood at time t is the average likelihood at time t plus the effect from the model, which is calculated as the difference from average times the proportion of frequencies”. The formula above assumes a linear translation of the proportion. 
It is possible to generalize the formula using a factor which we will call “a” as follows: (ii) Lt = (L – F) * (Ft / F) * a + Ft It is also possible to use a formula that does not scale the difference, like: (iii) Lt = (L – F) * a + Ft While these formulas seem reasonable, they should be taken as hypotheses to be proven with empirical data. A theoretical analysis provides the following insights: the Cumulative Gains Chart (lift) should stay the same, as at any given time the order of the likelihood for different customers is preserved; if F is equal to Ft then the formula reverts to “L”; if (Ft = 0) then Lt in (i) and (ii) is 0; it is possible for Lt to be above 1. If it is desired to avoid going over 1, for relatively high base frequencies it is possible to use a relative interpretation of the multiplicative factor. For example, if we say that Y is twice as likely as X, then we can interpret this sentence as: if X is 3%, then Y is 6%; if X is 11%, then Y is 22%; if X is 70%, then Y is 85% – in this case we interpret “twice as likely” as “half as likely to not happen”. Applying this reasoning to (i), for example, we would get: If (L < F) or (Ft < (1 / ((L/F) + 1))) Then Lt = L * (Ft / F) Else Lt = 1 – (F / L) + (Ft * F / L)
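
For what it's worth, formula (i) with the capping rule at the end translates directly into code. A minimal sketch in Python (variable names follow the definitions above; the second branch mirrors the "half as likely to not happen" interpretation):

    def seasonal_likelihood(L, F, Ft):
        """Scale a model likelihood L (learned at average frequency F)
        to the observed trailing frequency Ft: formula (i) with a cap."""
        if F == 0:
            return Ft  # no model signal to scale; fall back to the observed average
        if L < F or Ft < 1.0 / ((L / F) + 1.0):
            # plain proportional scaling
            return L * (Ft / F)
        # keep the result below 1 by scaling the "does not happen" side instead
        return 1.0 - (F / L) + (Ft * F / L)

    # example: model says 6% against a 3% average, current season runs at 9%
    print(seasonal_likelihood(L=0.06, F=0.03, Ft=0.09))  # ~0.18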

    Read the article

  • Trying to detect collision between two polygons using Separating Axis Theorem

    - by Holly
    The only collision experience i've had was with simple rectangles, i wanted to find something that would allow me to define polygonal areas for collision and have been trying to make sense of SAT using these two links Though i'm a bit iffy with the math for the most part i feel like i understand the theory! Except my implementation somewhere down the line must be off as: (excuse the hideous font) As mentioned above i have defined a CollisionPolygon class where most of my theory is implemented and then have a helper class called Vect which was meant to be for Vectors but has also been used to contain a vertex given that both just have two float values. I've tried stepping through the function and inspecting the values to solve things but given so many axes and vectors and new math to work out as i go i'm struggling to find the erroneous calculation(s) and would really appreciate any help. Apologies if this is not suitable as a question! CollisionPolygon.java: package biz.hireholly.gameplay; import android.graphics.Canvas; import android.graphics.Color; import android.graphics.Paint; import biz.hireholly.gameplay.Types.Vect; public class CollisionPolygon { Paint paint; private Vect[] vertices; private Vect[] separationAxes; CollisionPolygon(Vect[] vertices){ this.vertices = vertices; //compute edges and separations axes separationAxes = new Vect[vertices.length]; for (int i = 0; i < vertices.length; i++) { // get the current vertex Vect p1 = vertices[i]; // get the next vertex Vect p2 = vertices[i + 1 == vertices.length ? 0 : i + 1]; // subtract the two to get the edge vector Vect edge = p1.subtract(p2); // get either perpendicular vector Vect normal = edge.perp(); // the perp method is just (x, y) => (-y, x) or (y, -x) separationAxes[i] = normal; } paint = new Paint(); paint.setColor(Color.RED); } public void draw(Canvas c, int xPos, int yPos){ for (int i = 0; i < vertices.length; i++) { Vect v1 = vertices[i]; Vect v2 = vertices[i + 1 == vertices.length ? 0 : i + 1]; c.drawLine( xPos + v1.x, yPos + v1.y, xPos + v2.x, yPos + v2.y, paint); } } /* consider changing to a static function */ public boolean intersects(CollisionPolygon p){ // loop over this polygons separation exes for (Vect axis : separationAxes) { // project both shapes onto the axis Vect p1 = this.minMaxProjection(axis); Vect p2 = p.minMaxProjection(axis); // do the projections overlap? if (!p1.overlap(p2)) { // then we can guarantee that the shapes do not overlap return false; } } // loop over the other polygons separation axes Vect[] sepAxesOther = p.getSeparationAxes(); for (Vect axis : sepAxesOther) { // project both shapes onto the axis Vect p1 = this.minMaxProjection(axis); Vect p2 = p.minMaxProjection(axis); // do the projections overlap? if (!p1.overlap(p2)) { // then we can guarantee that the shapes do not overlap return false; } } // if we get here then we know that every axis had overlap on it // so we can guarantee an intersection return true; } /* Note projections wont actually be acurate if the axes aren't normalised * but that's not necessary since we just need a boolean return from our * intersects not a Minimum Translation Vector. 
*/ private Vect minMaxProjection(Vect axis) { float min = axis.dot(vertices[0]); float max = min; for (int i = 1; i < vertices.length; i++) { float p = axis.dot(vertices[i]); if (p < min) { min = p; } else if (p > max) { max = p; } } Vect minMaxProj = new Vect(min, max); return minMaxProj; } public Vect[] getSeparationAxes() { return separationAxes; } public Vect[] getVertices() { return vertices; } } Vect.java: package biz.hireholly.gameplay.Types; /* NOTE: Can also be used to hold vertices! Projections, coordinates ect */ public class Vect{ public float x; public float y; public Vect(float x, float y){ this.x = x; this.y = y; } public Vect perp() { return new Vect(-y, x); } public Vect subtract(Vect other) { return new Vect(x - other.x, y - other.y); } public boolean overlap(Vect other) { if( other.x <= y || other.y >= x){ return true; } return false; } /* used specifically for my SAT implementation which i'm figuring out as i go, * references for later.. * http://www.gamedev.net/page/resources/_/technical/game-programming/2d-rotated-rectangle-collision-r2604 * http://www.codezealot.org/archives/55 */ public float scalarDotProjection(Vect other) { //multiplier = dot product / length^2 float multiplier = dot(other) / (x*x + y*y); //to get the x/y of the projection vector multiply by x/y of axis float projX = multiplier * x; float projY = multiplier * y; //we want to return the dot product of the projection, it's meaningless but useful in our SAT case return dot(new Vect(projX,projY)); } public float dot(Vect other){ return (other.x*x + other.y*y); } }
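
Since the question is really about whether the projection/overlap logic matches the theory, here is a small language-neutral sketch of the same SAT check in Python (an illustration only, not a drop-in fix for the Java above; note that interval overlap for (min, max) pairs is "a.max >= b.min and b.max >= a.min"):

    def axes(vertices):
        # one perpendicular per edge: (x, y) -> (-y, x)
        n = len(vertices)
        for i in range(n):
            (x1, y1), (x2, y2) = vertices[i], vertices[(i + 1) % n]
            ex, ey = x1 - x2, y1 - y2
            yield (-ey, ex)

    def project(vertices, axis):
        dots = [vx * axis[0] + vy * axis[1] for vx, vy in vertices]
        return min(dots), max(dots)

    def intersects(poly_a, poly_b):
        for axis in list(axes(poly_a)) + list(axes(poly_b)):
            a_min, a_max = project(poly_a, axis)
            b_min, b_max = project(poly_b, axis)
            if a_max < b_min or b_max < a_min:  # gap found: a separating axis exists
                return False
        return True  # overlap on every axis => the polygons intersect

    square = [(0, 0), (2, 0), (2, 2), (0, 2)]
    print(intersects(square, [(1, 1), (3, 1), (3, 3), (1, 3)]))  # True
    print(intersects(square, [(5, 5), (6, 5), (6, 6), (5, 6)]))  # False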

    Read the article

  • Microsoft BUILD 2013 Day 1&ndash;Keynote

    - by Tim Murphy
    Originally posted on: http://geekswithblogs.net/tmurphy/archive/2013/06/27/microsoft-build-2013-day-1ndashkeynote.aspx This one is going to be a little long because the keynote was jam-packed, so bear with me. The keynote for the first day of BUILD 2013 was kicked off by Steve Ballmer. He made it very clear that Microsoft's focus is on accelerating its time to market with products and product updates. His quote was that "Rapid release" is the new norm. He continued by showing off several new Lumias that have been buzzing around the internet for a while and announced that Sprint will now be carrying the HTC 8XT and Samsung ATIV. Ballmer is known for repeating words or phrases for effect. This time it was "Rapid release, rapid release" and "Touch, touch, touch, touch, touch, …". This was fun, but even more fun was when he announced that all attendees would receive an Acer Iconia 8” tablet. SCORE! The next subject Ballmer focused on was new apps. The three new ones were Flipboard, Facebook and NFL Fantasy Football. I liked the first two because these are ones that people coming from other platforms are missing. The NFL app is great just because it targets a demographic that can be fanatical. If these types of apps keep coming then the missing app argument goes away. While many Negative Nancys are describing Windows 8.1 as Windows 180, Steve Ballmer chose to call it a "refined blend", as in a coffee that has been improved with a new mix. This includes more multi-tasking options and leveraging Bing straight throughout the entire ecosystem. He ended this first section by explaining that this will also bring more Bing development opportunities to the community. Steve Ballmer was followed by Julie Larson-Green, who spent her time on stage selling us on Windows 8 all over again, from my point of view. Something that I would not have thought was needed until I had listened to some other attendees who had a number of concerns and complaints. She showed a number of new gestures that will come with Windows 8.1, and while they were cool I was left wondering if they really improved the experience. I guess only time will tell. I did like the fact that the UI implementation to bring up "All Apps" now mirrors that of Windows Phone. The consistency is a big step forward that I hope to see continue. The cool factor went up from there as she swiped content from a desktop (mega-tablet) to the XBox One. This seamless experience I believe is what is really needed for any future platform to be relevant. I was much more enthused by the presentation of Antoine Leblond, who humbled us by letting us know that there are 5k new APIs. How that can be, or how anyone would ever use all of them, is another question. His announcement was that the Visual Studio 2013 preview would be available today along with the Windows 8.1 bits. One of the features of VS2013 that he demonstrated is the power consumption profiler. With battery life being a key factor with consumer consumption devices this is a welcome addition. He didn't limit his presentation to VS2013 features though. He showed how the Store has been redesigned to enable better search and discoverability of apps and how Win 8.1 can perform multiple screen scales depending on the resolution of the device automatically. The last feature he demoed was the real time video streaming API, which he made sure we understood by attaching a Surface to a little robot. Oh, but there was one more thing.  
Antoine and Julie announced that all attendees would also be getting Surface Pros. BONUS! How much more could there be? Gurdeep Singh Pall was about to pile on. He introduced us to Bing as a platform (BaaP?). He said if they (Microsoft) could do something with an API that is good, 3rd party developers can do something that is dynamite, and he showed us some of the tools they had produced. These included natural user interface improvements such as voice commands that looked to put Siri to shame. Add to that 3D, OCR and translation capabilities and the future looks to be full of opportunities. Ballmer then came out to show us one last thing. Project Spark is a game design environment that will be available for Windows 8.1, XBox 360 and XBox One. All I can say is that if my kids get their hands on this they are going to be able to learn some of what dad does in a much more enjoyable way. At the end of it all I was both exhausted and energized by what I saw. What could they have possibly left for the day 2 keynote? I hear it will feature Scott Hanselman. If that is right we are in for a treat. See you there. del.icio.us Tags: BUILD 2013,Windows 8.1,Windows Phone,XAML,Keynote,Bing,Visual Studio 2013,Project Spark

    Read the article

  • Error in my Separating Axis Theorem collision code

    - by Holly
    The only collision experience i've had was with simple rectangles, i wanted to find something that would allow me to define polygonal areas for collision and have been trying to make sense of SAT using these two links Though i'm a bit iffy with the math for the most part i feel like i understand the theory! Except my implementation somewhere down the line must be off as: (excuse the hideous font) As mentioned above i have defined a CollisionPolygon class where most of my theory is implemented and then have a helper class called Vect which was meant to be for Vectors but has also been used to contain a vertex given that both just have two float values. I've tried stepping through the function and inspecting the values to solve things but given so many axes and vectors and new math to work out as i go i'm struggling to find the erroneous calculation(s) and would really appreciate any help. Apologies if this is not suitable as a question! CollisionPolygon.java: package biz.hireholly.gameplay; import android.graphics.Canvas; import android.graphics.Color; import android.graphics.Paint; import biz.hireholly.gameplay.Types.Vect; public class CollisionPolygon { Paint paint; private Vect[] vertices; private Vect[] separationAxes; int x; int y; CollisionPolygon(Vect[] vertices){ this.vertices = vertices; //compute edges and separations axes separationAxes = new Vect[vertices.length]; for (int i = 0; i < vertices.length; i++) { // get the current vertex Vect p1 = vertices[i]; // get the next vertex Vect p2 = vertices[i + 1 == vertices.length ? 0 : i + 1]; // subtract the two to get the edge vector Vect edge = p1.subtract(p2); // get either perpendicular vector Vect normal = edge.perp(); // the perp method is just (x, y) => (-y, x) or (y, -x) separationAxes[i] = normal; } paint = new Paint(); paint.setColor(Color.RED); } public void draw(Canvas c, int xPos, int yPos){ for (int i = 0; i < vertices.length; i++) { Vect v1 = vertices[i]; Vect v2 = vertices[i + 1 == vertices.length ? 0 : i + 1]; c.drawLine( xPos + v1.x, yPos + v1.y, xPos + v2.x, yPos + v2.y, paint); } } public void update(int xPos, int yPos){ x = xPos; y = yPos; } /* consider changing to a static function */ public boolean intersects(CollisionPolygon p){ // loop over this polygons separation exes for (Vect axis : separationAxes) { // project both shapes onto the axis Vect p1 = this.minMaxProjection(axis); Vect p2 = p.minMaxProjection(axis); // do the projections overlap? if (!p1.overlap(p2)) { // then we can guarantee that the shapes do not overlap return false; } } // loop over the other polygons separation axes Vect[] sepAxesOther = p.getSeparationAxes(); for (Vect axis : sepAxesOther) { // project both shapes onto the axis Vect p1 = this.minMaxProjection(axis); Vect p2 = p.minMaxProjection(axis); // do the projections overlap? if (!p1.overlap(p2)) { // then we can guarantee that the shapes do not overlap return false; } } // if we get here then we know that every axis had overlap on it // so we can guarantee an intersection return true; } /* Note projections wont actually be acurate if the axes aren't normalised * but that's not necessary since we just need a boolean return from our * intersects not a Minimum Translation Vector. 
*/ private Vect minMaxProjection(Vect axis) { float min = axis.dot(new Vect(vertices[0].x+x, vertices[0].y+y)); float max = min; for (int i = 1; i < vertices.length; i++) { float p = axis.dot(new Vect(vertices[i].x+x, vertices[i].y+y)); if (p < min) { min = p; } else if (p > max) { max = p; } } Vect minMaxProj = new Vect(min, max); return minMaxProj; } public Vect[] getSeparationAxes() { return separationAxes; } public Vect[] getVertices() { return vertices; } } Vect.java: package biz.hireholly.gameplay.Types; /* NOTE: Can also be used to hold vertices! Projections, coordinates ect */ public class Vect{ public float x; public float y; public Vect(float x, float y){ this.x = x; this.y = y; } public Vect perp() { return new Vect(-y, x); } public Vect subtract(Vect other) { return new Vect(x - other.x, y - other.y); } public boolean overlap(Vect other) { if(y > other.x && other.y > x){ return true; } return false; } /* used specifically for my SAT implementation which i'm figuring out as i go, * references for later.. * http://www.gamedev.net/page/resources/_/technical/game-programming/2d-rotated-rectangle-collision-r2604 * http://www.codezealot.org/archives/55 */ public float scalarDotProjection(Vect other) { //multiplier = dot product / length^2 float multiplier = dot(other) / (x*x + y*y); //to get the x/y of the projection vector multiply by x/y of axis float projX = multiplier * x; float projY = multiplier * y; //we want to return the dot product of the projection, it's meaningless but useful in our SAT case return dot(new Vect(projX,projY)); } public float dot(Vect other){ return (other.x*x + other.y*y); } }

    Read the article

  • Relative cam movement and momentum on arbitrary surface

    - by user29244
    I have been working on a game for quite long, think sonic classic physics in 3D or tony hawk psx, with unity3D. However I'm stuck at the most fundamental aspect of movement. The requirement is that I need to move the character in mario 64 fashion (or sonic adventure) aka relative cam input: the camera's forward direction always point input forward the screen, left or right input point toward left or right of the screen. when input are resting, the camera direction is independent from the character direction and the camera can orbit the character when input are pressed the character rotate itself until his direction align with the direction the input is pointing at. It's super easy to do as long your movement are parallel to the global horizontal (or any world axis). However when you try to do this on arbitrary surface (think moving along complex curved surface) with the character sticking to the surface normal (basically moving on wall and ceiling freely), it seems harder. What I want is to achieve the same finesse of movement than in mario but on arbitrary angled surfaces. There is more problem (jumping and transitioning back to the real world alignment and then back on a surface while keeping momentum) but so far I didn't even take off the basics. So far I have accomplish moving along the curved surface and the relative cam input, but for some reason direction fail all the time (point number 3, the character align slowly to the input direction). Do you have an idea how to achieve that? Here is the code and some demo so far: The demo: https://dl.dropbox.com/u/24530447/flash%20build/litesonicengine/LiteSonicEngine5.html Camera code: using UnityEngine; using System.Collections; public class CameraDrive : MonoBehaviour { public GameObject targetObject; public Transform camPivot, camTarget, camRoot, relcamdirDebug; float rot = 0; //---------------------------------------------------------------------------------------------------------- void Start() { this.transform.position = targetObject.transform.position; this.transform.rotation = targetObject.transform.rotation; } void FixedUpdate() { //the pivot system camRoot.position = targetObject.transform.position; //input on pivot orientation rot = 0; float mouse_x = Input.GetAxisRaw( "camera_analog_X" ); // rot = rot + ( 0.1f * Time.deltaTime * mouse_x ); // wrapAngle( rot ); // //when the target object rotate, it rotate too, this should not happen UpdateOrientation(this.transform.forward,targetObject.transform.up); camRoot.transform.RotateAround(camRoot.transform.up,rot); //debug the relcam dir RelativeCamDirection() ; //this camera this.transform.position = camPivot.position; //set the camera to the pivot this.transform.LookAt( camTarget.position ); // } //---------------------------------------------------------------------------------------------------------- public float wrapAngle ( float Degree ) { while (Degree < 0.0f) { Degree = Degree + 360.0f; } while (Degree >= 360.0f) { Degree = Degree - 360.0f; } return Degree; } private void UpdateOrientation( Vector3 forward_vector, Vector3 ground_normal ) { Vector3 projected_forward_to_normal_surface = forward_vector - ( Vector3.Dot( forward_vector, ground_normal ) ) * ground_normal; camRoot.transform.rotation = Quaternion.LookRotation( projected_forward_to_normal_surface, ground_normal ); } float GetOffsetAngle( float targetAngle, float DestAngle ) { return ((targetAngle - DestAngle + 180)% 360) - 180; } 
//---------------------------------------------------------------------------------------------------------- void OnDrawGizmos() { Gizmos.DrawCube( camPivot.transform.position, new Vector3(1,1,1) ); Gizmos.DrawCube( camTarget.transform.position, new Vector3(1,5,1) ); Gizmos.DrawCube( camRoot.transform.position, new Vector3(1,1,1) ); } void OnGUI() { GUI.Label(new Rect(0,80,1000,20*10), "targetObject.transform.up : " + targetObject.transform.up.ToString()); GUI.Label(new Rect(0,100,1000,20*10), "target euler : " + targetObject.transform.eulerAngles.y.ToString()); GUI.Label(new Rect(0,100,1000,20*10), "rot : " + rot.ToString()); } //---------------------------------------------------------------------------------------------------------- void RelativeCamDirection() { float input_vertical_movement = Input.GetAxisRaw( "Vertical" ), input_horizontal_movement = Input.GetAxisRaw( "Horizontal" ); Vector3 relative_forward = Vector3.forward, relative_right = Vector3.right, relative_direction = ( relative_forward * input_vertical_movement ) + ( relative_right * input_horizontal_movement ) ; MovementController MC = targetObject.GetComponent<MovementController>(); MC.motion = relative_direction.normalized * MC.acceleration * Time.fixedDeltaTime; MC.motion = this.transform.TransformDirection( MC.motion ); //MC.transform.Rotate(Vector3.up, input_horizontal_movement * 10f * Time.fixedDeltaTime); } } Mouvement code: using UnityEngine; using System.Collections; public class MovementController : MonoBehaviour { public float deadZoneValue = 0.1f, angle, acceleration = 50.0f; public Vector3 motion ; //-------------------------------------------------------------------------------------------- void OnGUI() { GUILayout.Label( "transform.rotation : " + transform.rotation ); GUILayout.Label( "transform.position : " + transform.position ); GUILayout.Label( "angle : " + angle ); } void FixedUpdate () { Ray ground_check_ray = new Ray( gameObject.transform.position, -gameObject.transform.up ); RaycastHit raycast_result; Rigidbody rigid_body = gameObject.rigidbody; if ( Physics.Raycast( ground_check_ray, out raycast_result ) ) { Vector3 next_position; //UpdateOrientation( gameObject.transform.forward, raycast_result.normal ); UpdateOrientation( gameObject.transform.forward, raycast_result.normal ); next_position = GetNextPosition( raycast_result.point ); rigid_body.MovePosition( next_position ); } } //-------------------------------------------------------------------------------------------- private void UpdateOrientation( Vector3 forward_vector, Vector3 ground_normal ) { Vector3 projected_forward_to_normal_surface = forward_vector - ( Vector3.Dot( forward_vector, ground_normal ) ) * ground_normal; transform.rotation = Quaternion.LookRotation( projected_forward_to_normal_surface, ground_normal ); } private Vector3 GetNextPosition( Vector3 current_ground_position ) { Vector3 next_position; // //-------------------------------------------------------------------- // angle = 0; // Vector3 dir = this.transform.InverseTransformDirection(motion); // angle = Vector3.Angle(Vector3.forward, dir);// * 1f * Time.fixedDeltaTime; // // if(angle > 0) this.transform.Rotate(0,angle,0); // //-------------------------------------------------------------------- next_position = current_ground_position + gameObject.transform.up * 0.5f + motion ; return next_position; } } Some observation: I have the correct input, I have the correct translation in the camera direction ... 
but whenever I attempt to slowly lerp the direction of the character in the direction of the input, all I get is a wild spin! I also discovered that strafing to the right (immediately at the beginning, without moving forward) has major singularity trapping on the equator!! I'm totally lost and crushed (I have already done a much more featured version which fails at the same point)
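
The "align the character to the input direction while stuck to an arbitrary surface" step is mostly two vector operations: project the camera-relative input onto the tangent plane of the surface normal, then rotate the current forward a limited amount per frame toward that projected direction instead of snapping. A minimal sketch of that math in plain Python (an illustration of the idea only, not Unity API code):

    import math

    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def scale(v, s): return tuple(x * s for x in v)
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def norm(v):
        length = math.sqrt(dot(v, v))
        return v if length == 0 else scale(v, 1.0 / length)

    def project_on_plane(v, n):
        # remove the component of v along the (unit) surface normal n
        return sub(v, scale(n, dot(v, n)))

    def turn_towards(forward, target, step):
        # normalized lerp by 'step' in [0, 1]: a cheap stand-in for a
        # constant-speed rotation, enough to stop instant snapping
        blended = tuple(f * (1 - step) + t * step for f, t in zip(forward, target))
        return norm(blended)

    normal = (0.0, 1.0, 0.0)                      # surface normal under the character
    cam_input = (0.3, -0.2, 1.0)                  # camera-relative input direction
    desired = norm(project_on_plane(cam_input, normal))
    facing = turn_towards((0.0, 0.0, 1.0), desired, 0.1)
    print(facing)

In Unity terms this is the same projection UpdateOrientation already applies to the forward vector, applied to the input direction as well, plus a per-frame turn limit; removing the instantaneous alignment is usually what removes the wild spinning.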

    Read the article

  • Knowing your user is key--Part 1: Motivation

    - by erikanollwebb
    I was thinking about where the best place to start in this blog would be, and finally came back to a theme that I think is pretty critical--successful gamification in the enterprise comes down to knowing your user. Lots of folks will say that gamification is about understanding that everyone is a gamer. But at least in my org, that argument won't play for a lot of people. Pun intentional. It's not that I don't see the attraction to the idea--really, very few people play no games at all. If they don't play video games, they might play solitaire on their computer. They may play card games, or some type of sport. Mario Herger has some great facts on how much game playing there is going on at his Enterprise-Gamification.com website. But at the end of the day, I can't sell that into my organization well. We are Oracle. We make big, serious software designed to run your whole business. We don't make Angry Birds out of your financial reporting tools. So I stick with the argument that works better. Gamification techniques are really just good principles of user experience packaged a little differently. Feedback? We already know feedback is important when using software. Progress indicators? Got that too. Game mechanics may package things in a more explicit way but it's not really "new". Knowing how to use game mechanics--and this is what a user experience team is important for--comes down to totally understanding who our users are and what they are motivated by. For several years, I taught college psychology courses, including Motivation. Motivation is generally broken down into intrinsic and extrinsic motivation. There's intrinsic, which comes from within the individual. And there's extrinsic, which comes from outside the individual. Intrinsic motivation is that motivation that comes from just a general sense of pleasure in the doing of something. For example, I like to cook. I like to cook a lot. The kind of cooking I think is just fun makes other people--people who don't like to cook--cringe. Like the cake I made this week--the star-spangled rhapsody from The Cake Bible: two layers of meringue, two layers of genoise flavored with a raspberry eau de vie syrup, whipped cream with berries and a mousseline buttercream, also flavored with raspberry liqueur and topped with fresh raspberries and blueberries. I love cooking--I ask for cooking tools for my birthday and Christmas, I take classes like sushi making and knife skills for fun. I like reading about how you can make an emulsion of egg yolks, melted butter and lemon, cook it slowly and transform it into a sauce hollandaise (my use of all the egg yolks that didn't go into the aforementioned cake). And while it's nice when people like what I cook, I don't do it for that. I do it because I think it's fun. My former boss, Ultan Ó Broin, loves to fish in the sea off the coast of Ireland. Not because he gets prizes for it, or awards, but because it's fun. To quote a note he sent me today when I asked if having been recently ill kept him from the beginning of mackerel season, he told me he had already been out and said "I can fish when on a deathbed" (to read more of Ultan's work, see his blogs on User Assistance and Translation). That's not the kind of intensity you get about something you don't like to do. I'm sure you can think of something you do just because you like it. So how does that relate to gamification? Gamification in the enterprise space is about uncovering the game within work. 
Gamification is about tapping into things people already find motivating.  But to do that, you need to know what that user is motivated by. Customer Relationship Management (CRM) is one of those areas where over-the-top gamification seems to work (not to plug a competitor in this space, but you can search on what Bunchball* has done with a company just a little north of us on 101 for the CRM crowd).  Sales people are naturally competitive and thrive on that plus recognition of their sales work.  You can use lots of game mechanics like leaderboards and challenges and scorecards with this type of user and they love it.  Show my whole org I'm leading in sales for the quarter?  Bring it on!  However, take the average accountant and show how much general ledger activity they have done in the last week and expose it to their whole org on a leaderboard and I think you'd see a lot of people looking for a new job.  Why?  Because in general, accountants aren't extraverts who thrive on competition in their work.  That doesn't mean there aren't game mechanics that would work for them, but they won't be the same game mechanics that work for sales people.  It's a different type of user and they are motivated by different things. To break this up, I'll stop here and post now.  I'll pick this thread up in the next post. Thoughts? Questions? *Disclosure: To my knowledge, Oracle has no relationship with Bunchball at this point in time.

    Read the article

  • Using QT to build a WYSIWYG Editor for a Custom Markup Language

    - by Aaron
    I'm new to QT, and am trying to figure out the best means of creating a WYSIWYG editor widget for a custom markup language that displays simple text, images, and links. I need to be able to propagate changes from the WYSIWYG editor to the custom markup representation. As a concrete example of the problem domain, imagine that the custom markup might have a "player" tag which contains a player name and a team name. The markup could look like this: Last week, <player id="1234"><name>Aaron Rodgers</name><team>Packers</team></player> threw a pass. This text would display in the editor as: Last week, Aaron Rodgers of the Packers threw a pass. The player name and the team name would be editable directly within the editor in standard WYSIWYG fashion, so that my users do not have to learn any markup. Also, when the player name is moused over, a details pop-up will appear about that player, and similarly for the team. With that long introduction, I'm trying to figure out where to start with QT. It seems that the most logical option would be the Rich Text API using a QTextDocument. This approach seems less than ideal given the limitations of a QTextDocument: I can't figure out how to capture navigation events from clicking on links; following links on click seems to only be enabled when the QTextEdit is read-only; custom objects that implement QTextObjectInterface are ignored in copy-and-paste operations; and any HTML-based markup that is passed to it as Rich Text is retranslated into a series of span tags and lots of other junk, making it extremely difficult to propagate changes from the editor back to the original custom markup. A second option appears to be QWebKit, which allows for live editing of HTML5 markup, so I could specify a two-way translation between the custom markup and HTML5. I'm not clear on how one would propagate changes from the editor back to the original markup in real time without re-translating the entire document on every text change. The QWebKit solution also looks awfully bulky to me (learning WebKit along with QT) for what should be a relatively simple problem. I have also considered implementing the WYSIWYG editor as a custom class using native QT containers, labels, images, and other widgets manually. This seems like the most flexible approach, and the one most likely not to run into unresolvable problems. However, I'm pretty sure that implementing all the details of a normal text editor (selecting text, font changes, cut-and-paste support, undo/redo, dragging of objects, cursor placement, etc.) will be incredibly time consuming. So, finally, my question: are there any QT gurus out there with some advice on where to start with this sort of project? BTW, I am using QT because the application is a desktop application that needs platform independence.

    Read the article

  • What'd be a good pattern on Doctrine to have multiple languages

    - by PERR0_HUNTER
    Hi! I have this challenge which consists of having a system that offers its content in multiple languages; however, a part of the data contained in the system is not translatable, such as dates, ints and such. I mean, if I have content like the following YAML Corporativos: columns: nombre: type: string(254) notnull: true telefonos: type: string(500) email: type: string(254) webpage: type: string(254) CorporativosLang: columns: corporativo_id: type: integer(8) notnull: true lang: type: string(16) fixed: false ubicacion: type: string() fixed: false unsigned: false primary: false notnull: true autoincrement: false contacto: type: string() fixed: false unsigned: false primary: false notnull: true autoincrement: false tipo_de_hoteles: type: string(254) fixed: false unsigned: false primary: false notnull: true autoincrement: false paises: type: string() fixed: false unsigned: false primary: false notnull: true autoincrement: false relations: Corporativo: class: Corporativos local: corporativo_id foreign: id type: one foreignAlias: Data This would allow me to have different corporative offices; however, the place of the corp, the contact and other things can be translated into a different language (lang). Now this code here would create me a brand new corporative office with 2 translations: $corporativo = new Corporativos(); $corporativo->nombre = 'Duck Corp'; $corporativo->telefonos = '66303713333'; $corporativo->email = '[email protected]'; $corporativo->webpage = 'http://quack.com'; $corporativo->Data[0]->lang = 'en'; $corporativo->Data[0]->ubicacion = 'zomg'; $corporativo->Data[1]->lang = 'es'; $corporativo->Data[1]->ubicacion = 'zomg amigou'; The thing now is I don't know how to retrieve this data in a more friendly way, because if I'd like to access my Corporative info in English I'd have to run DQL for the corp and then another DQL query for the specific translation in English. What I'd love to do is have my translatable fields available in the root so I could simply access them: $corporativo = new Corporativos(); $corporativo->nombre = 'Duck Corp'; $corporativo->telefonos = '66303713333'; $corporativo->email = '[email protected]'; $corporativo->webpage = 'http://quack.com'; $corporativo->lang = 'en'; $corporativo->ubicacion = 'zomg'; This way the translatable fields would be mapped to the second table automatically. I hope I explained myself clearly :( Any suggestions?

    Read the article

  • PyGTK: dynamic label wrapping

    - by detly
    It's a known bug/issue that a label in GTK will not dynamically resize when the parent changes. It's one of those really annoying small details, and I want to hack around it if possible. I followed the approach at 16 software, but as per the disclaimer you cannot then resize it smaller. So I attempted a trick mentioned in one of the comments (the set_size_request call in the signal callback), but this results in some sort of infinite loop (try it and see). Does anyone have any other ideas? (You can't block the signal just for the duration of the call, since as the print statements seem to indicate, the problem starts after the function is left.) The code is below. You can see what I mean if you run it and try to resize the window larger and then smaller. (If you want to see the original problem, comment out the line after "Connect to the size-allocate signal", run it, and resize the window bigger.) The Glade file ("example.glade"): <?xml version="1.0"?> <glade-interface> <!-- interface-requires gtk+ 2.16 --> <!-- interface-naming-policy project-wide --> <widget class="GtkWindow" id="window1"> <property name="visible">True</property> <signal name="destroy" handler="on_destroy"/> <child> <widget class="GtkLabel" id="label1"> <property name="visible">True</property> <property name="label" translatable="yes">In publishing and graphic design, lorem ipsum[p][1][2] is the name given to commonly used placeholder text (filler text) to demonstrate the graphic elements of a document or visual presentation, such as font, typography, and layout. The lorem ipsum text, which is typically a nonsensical list of semi-Latin words, is a hacked version of a Latin text by Cicero, with words/letters omitted and others inserted, but not proper Latin[1][2] (see below: History and discovery). The closest English translation would be "pain itself" (dolorem = pain, grief, misery, suffering; ipsum = itself).</property> <property name="wrap">True</property> </widget> </child> </widget> </glade-interface> The Python code: #!/usr/bin/python import pygtk import gobject import gtk.glade def wrapped_label_hack(gtklabel, allocation): print "In wrapped_label_hack" gtklabel.set_size_request(allocation.width, -1) # If you uncomment this, we get INFINITE LOOPING! # gtklabel.set_size_request(-1, -1) print "Leaving wrapped_label_hack" class ExampleGTK: def __init__(self, filename): self.tree = gtk.glade.XML(filename, "window1", "Example") self.id = "window1" self.tree.signal_autoconnect(self) # Connect to the size-allocate signal self.get_widget("label1").connect("size-allocate", wrapped_label_hack) def on_destroy(self, widget): self.close() def get_widget(self, id): return self.tree.get_widget(id) def close(self): window = self.get_widget(self.id) if window is not None: window.destroy() gtk.main_quit() if __name__ == "__main__": window = ExampleGTK("example.glade") gtk.main()

    Read the article

  • How to use ResourceManager in a "website" mode ?

    - by Erick
    I am trying here to do a manual translation for the application I am working with. (There is already a working LocalizationModule, but it behaves dodgily, so I can't use <asp:Localize /> tags.) Normally with ResourceManager you are supposed to use it as Namespace.Folder.Resourcename (in an application). Currently I am translating an existing ASP.NET "website" (not a web application, so no namespace here....). The resources are located in a folder named "Locales/resources" which contains "fr-ca.resx" and "en-us.resx". So I used code something like this: public static string T(string search) { System.Resources.ResourceManager resMan = new System.Resources.ResourceManager( "Locales", System.Reflection.Assembly.GetExecutingAssembly(), null ); var text = resMan.GetString(search, System.Threading.Thread.CurrentThread.CurrentCulture); if (text == null) return "null"; else if (text == string.Empty) return "empty"; else return text; } and inside the page I have something like this: <%= Locale.T("T_HOME") %> When I refresh I get this: Could not find any resources appropriate for the specified culture or the neutral culture. Make sure "Locales.resources" was correctly embedded or linked into assembly "App_Code.9yopn1f7" at compile time, or that all the satellite assemblies required are loadable and fully signed. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.Resources.MissingManifestResourceException: Could not find any resources appropriate for the specified culture or the neutral culture. Make sure "Locales.resources" was correctly embedded or linked into assembly "App_Code.9yopn1f7" at compile time, or that all the satellite assemblies required are loadable and fully signed. Source Error: Line 14: System.Resources.ResourceManager resMan = new System.Resources.ResourceManager( "Locales", System.Reflection.Assembly.GetExecutingAssembly(), null ); Line 15: Line 16: var text = resMan.GetString(search, System.Threading.Thread.CurrentThread.CurrentCulture); Line 17: Line 18: if (text == null) Source File: c:\inetpub\vhosts\galerieocarre.com\subdomains\dev\httpdocs\App_Code\Locale.cs Line: 16 I even tried to load the resource with Locales.fr-ca or only fr-ca, but nothing quite works here.

    Read the article

  • PHP-based LaTeX parser -- where to begin?

    - by Alex Basson
    The project: I want to build a LaTeX-to-MathML translator in PHP. Why? Because I'm a mathematician, and I want to publish math on my Drupal site. It doesn't have to translate all of LaTeX, since the basic document-level stuff is ably handled by the CMS and wouldn't be written in LaTeX to begin with; it just has to translate math written in LaTeX into math written in MathML. Although I feel as though I've done my due diligence, this doesn't seem to exist already. Maybe I'm wrong---if you know of something that would serve this purpose, by all means let me know, and thank you in advance. But assuming it doesn't exist, I guess I have to go write it myself. Here's the thing, though: I've never done anything this ambitious. I don't really know where to begin. I've used PHP for years, but just to do the standard "build a CMS with PHP and MySQL"-type of stuff. I've never attempted anything as seemingly sophisticated as translation from one language to another. I'm just dumb enough to consider doing it with regex---after all, LaTeX is a much more formal language, and it doesn't allow for nearly the kinds of pathological edge-cases, as say, HTML. But on the other hand, I'm just smart enough to realize this is probably a terrible idea: now I have two problems, and I sure don't want to end up like this guy. So if that's not the way to go (right?), what is? How should I start thinking about this problem? Am I essentially writing a LaTeX compiler in PHP, and if so, what do I need to know to do that (like, should I just go read the Purple Dragon book first?)? I'm both really excited and pretty intimidated by the prospect of this project, but hey, this is how we all learn to be programmers, right? If something we need doesn't exist, we go and build it, necessity is the mother of... you get the point. Tremendous thanks to everyone in advance for any and all guidance you can offer.

    Read the article

  • No Norwegian characters in LaTeX

    - by DreamCodeR
    Hi, I have translated a document from English to Norwegian in the LaTeX format, and while using Norwegian special characters, I get an error using \usepackage[utf8x]{inputenc} to try and display the Norwegian (Scandinavian) special characters in PostScript/PDF/DVI format, saying Package utf8x Error: MalformedUTF-8sequence. So while that didn't work, I tried out another possible solution: \usepackage{ucs} \usepackage[norsk]babel And when I tried to save that in Emacs I get this message: These default coding systems were tried to encode text in the buffer `lol.tex': (utf-8-unix (905 . 4194277) (916 . 4194245) (945 . 4194278) (950 . 4194277) (954 . 4194296) (990 . 4194277) (1010 . 4194277) (1013 . 4194278) (1051 . 4194277) (1078 . 4194296) (1105 . 4194296)) However, each of them encountered characters it couldn't encode: utf-8-unix cannot encode these: \345 \305 \346 \345 \370 \345 \345 \346 \345 \370 ... Thanks to Emacs I have the possibility to check out the properties of those characters, and the first one tells me: character: \345 (4194277, #o17777745, #x3fffe5) preferred charset: eight-bit (Raw bytes 128-255) code point: 0xE5 syntax: w which means: word buffer code: #xE5 file code: not encodable by coding system utf-8-unix display: not encodable for terminal Which doesn't tell me much. When I try to build this with texi2dvi --dvipdf filename.text I get a perfectly fine PDF, all without the special Norwegian characters. When I am about to save, Emacs also asks me: "Select coding system (default raw-text):" And I type in utf-8 to choose its coding system. I have also tried to choose the default raw-text to see if I get some different result. But nothing. At last I tried \lstset{inputencoding=utf8x, extendedchars=\true} ... a snippet I came across while trying to google a solution to this problem. Which gives me this error: Undefined control sequence. So basically, I have tried every encoding option I have been able to find and nothing works. I am desperately trying to make this work since the Norwegian translation must be published before the deadline. As additional information I may add that I found out later on that I only had en_US.UTF-8 in my locale, so I added nb_NO.UTF-8 and nb_NO.ISO-8859-15 and ran locale-gen + reboot without any changes. I hope I provided enough information to get some assistance; the characters in question are æ ø å.

    Read the article

  • multiple definition of inline function

    - by K71993
    Hi, I have gone through some posts related to this topic but was not able to sort out my doubt completely. This might be a very naive question. Code Description I have a header file "inline.h" and two translation units "main.cpp" and "tran.cpp". Details of the code are as below. inline.h file details: #ifndef __HEADER__ #include <stdio.h> extern inline int func1(void) { return 5; } static inline int func2(void) { return 6; } inline int func3(void) { return 7; } #endif main.c file details are below: #include <stdio.h> #include <inline.h> int main(int argc, char *argv[]) { printf("%d\n",func1()); printf("%d\n",func2()); printf("%d\n",func3()); return 0; } tran.cpp file details (note that the functions are not inline here): #include <stdio.h> int func1(void) { return 500; } int func2(void) { return 600; } int func3(void) { return 700; } Question The above code does not compile with the gcc compiler whereas it compiles with g++ (assuming you make the gcc-related changes, like renaming the files to .c, not using any C++ header files, etc.). The error displayed is "duplicate definition of inline function - func3". Can you clarify why this difference is present across compilers? When you run the program (g++ compiled) by creating two separate compilation units (main.o and tran.o) and creating an executable a.out, the output obtained is 500 6 700 Why does the compiler pick up the definition of the function which is not inline? Actually, since #include is used to "add" the inline definition, I had expected 5, 6, 7 as the output. My understanding was that during compilation, since the inline definition is found, the function call would be "replaced" by the inline function definition. Can you please tell me in detailed steps the process of compilation and linking which would lead to the 500, 6, 700 output? I can only understand the output 6. Thanks in advance for your valuable input.

    Read the article

  • Opinion on "loop invariants", and are these frequently used in the industry?

    - by Michael Aaron Safyan
    I was thinking back to my freshman year at college (five years ago) when I took an exam to place-out of intro-level computer science. There was a question about loop invariants, and I was wondering if loop invariants are really necessary in this case or if the question was simply a bad example... the question was to write an iterative definition for a factorial function, and then to prove that the function was correct. The code that I provided for the factorial function was as follows: public static int factorial(int x) { if ( x < 0 ){ throw new IllegalArgumentException("Parameter must be = 0"); }else if ( x == 0 ){ return 1; }else{ int result = 1; for ( int i = 1; i <= x; i++ ){ result*=i; } return result; } } My own proof of correctness was a proof by cases, and in each I asserted that it was correct by definition (x! is undefined for negative values, 0! is 1, and x! is 1*2*3...*x for a positive value of x). The professor wanted me to prove the loop using a loop invariant; however, my argument was that it was correct "by definition", because the definition of "x!" for a positive integer x is "the product of the integers from 1... x", and the for-loop in the else clause is simply a literal translation of this definition. Is a loop invariant really needed as a proof of correctness in this case? How complicated must a loop be before a loop invariant (and proper initialization and termination conditions) become necessary for a proof of correctness? Additionally, I was wondering... how often are such formal proofs used in the industry? I have found that about half of my courses are very theoretical and proof-heavy and about half are very implementation and coding-heavy, without any formal or theoretical material. How much do these overlap in practice? If you do use proofs in the industry, when do you apply them (always, only if it's complicated, rarely, never)?
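    As a concrete illustration of what such a proof usually looks like (my own sketch, not the original exam answer), the same loop can be annotated with an invariant and argued through initialization, maintenance, and termination:

        // Invariant: at the start of every loop iteration, result == (i - 1)!
        public static int factorial(int x) {
            if (x < 0) {
                throw new IllegalArgumentException("Parameter must be >= 0");
            }
            int result = 1;
            // Initialization: before the first iteration, i == 1 and result == 1 == 0!,
            // so the invariant holds.
            for (int i = 1; i <= x; i++) {
                // Maintenance: assuming result == (i - 1)! here, result *= i gives
                // result == i!, so the invariant holds again once i is incremented.
                result *= i;
            }
            // Termination: the loop exits with i == x + 1, so result == (i - 1)! == x!.
            return result;
        }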

    Read the article

  • How to define paper in the Raphael JS library?

    - by cj333
    Hi, I want to learn the Raphael JS library to draw a square. I copied the official code, but it does not work: "paper is not defined on line 34". How do I define it? The demo is at http://www.irunmywebsite.com/raphael/additionalhelp.php under the left menu "animate". Thanks. <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd"> <html lang="en"> <head> <meta http-equiv="Content-Type" content="text/html; charset=utf-8"> <title>Raphaël · Gear</title> <style type="text/css"> body { background: #333; color: #fff; font: 300 100.1% "Helvetica Neue", Helvetica, "Arial Unicode MS", Arial, sans-serif; } #holder { height: 480px; left: 50%; margin: -240px 0 0 -320px; position: absolute; top: 50%; width: 640px; } #copy { bottom: 0; font: 300 .7em "Helvetica Neue", Helvetica, "Arial Unicode MS", Arial, sans-serif; position: absolute; right: 1em; text-align: right; } #copy a { color: #fff; } </style> <script src="raphael-min.js" type="text/javascript" charset="utf-8"></script> <script type="text/javascript" charset="utf-8"> var a = paper.circle(320, 100, 60).attr({fill:'#FF0000'}); a.animate({ 'translation': '0,300' }, 500, 'bounce'); var b = paper.circle(320, 100, 60).attr({fill:'#FFFF00'});; b.animate({ cx: 320, cy: 300 }, 500, 'bounce'); var path1=paper.path("M114 253").attr({"stroke": "#f00", "stroke-width":3}); path1.animate({path: "M114 253 L 234 253"},3000,function(){ var path2=paper.path("M234 253").attr({"stroke": "#f00","stroke-width":3}); path2.animate({path: "M234 253 L 234 134"},3000,function(){ var path3=paper.path("M234 134").attr({"stroke": "#f00","stroke-width":3}); path3.animate({path: "M234 134 L 97 134"},3000); }); }); </script> </head> <body> <div id="stroke"></div> </body> </html>

    Read the article

  • Android - OPENGL cube is NOT in the display

    - by Marc Ortiz
    I'm trying to display a square on my display and I can't. What's my problem? How can I display it on the screen (in the center of the screen)? I've put my code below. Here's my render class: public class GLRenderEx implements Renderer { private GLCube cube; Context c; GLCube quad; // ( NEW ) // Constructor public GLRenderEx(Context context) { // Set up the data-array buffers for these shapes ( NEW ) quad = new GLCube(); // ( NEW ) } // Call back when the surface is first created or re-created. @Override public void onSurfaceCreated(GL10 gl, EGLConfig config) { // NO CHANGE - SKIP } // Call back after onSurfaceCreated() or whenever the window's size changes. @Override public void onSurfaceChanged(GL10 gl, int width, int height) { // NO CHANGE - SKIP } // Call back to draw the current frame. @Override public void onDrawFrame(GL10 gl) { // Clear color and depth buffers using clear-values set earlier gl.glClear(GL10.GL_COLOR_BUFFER_BIT | GL10.GL_DEPTH_BUFFER_BIT); gl.glLoadIdentity(); // Reset model-view matrix ( NEW ) gl.glTranslatef(-1.5f, 0.0f, -6.0f); // Translate left and into the // screen ( NEW ) // Translate right, relative to the previous translation ( NEW ) gl.glTranslatef(3.0f, 0.0f, 0.0f); quad.draw(gl); // Draw quad ( NEW ) } } And here is my square class: public class GLCube { private FloatBuffer vertexBuffer; // Buffer for vertex-array private float[] vertices = { // Vertices for the square -1.0f, -1.0f, 0.0f, // 0. left-bottom 1.0f, -1.0f, 0.0f, // 1. right-bottom -1.0f, 1.0f, 0.0f, // 2. left-top 1.0f, 1.0f, 0.0f // 3. right-top }; // Constructor - Setup the vertex buffer public GLCube() { // Setup vertex array buffer. Vertices in float. A float has 4 bytes ByteBuffer vbb = ByteBuffer.allocateDirect(vertices.length * 4); vbb.order(ByteOrder.nativeOrder()); // Use native byte order vertexBuffer = vbb.asFloatBuffer(); // Convert from byte to float vertexBuffer.put(vertices); // Copy data into buffer vertexBuffer.position(0); // Rewind } // Render the shape public void draw(GL10 gl) { // Enable vertex-array and define its buffer gl.glEnableClientState(GL10.GL_VERTEX_ARRAY); gl.glVertexPointer(3, GL10.GL_FLOAT, 0, vertexBuffer); // Draw the primitives from the vertex-array directly gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, vertices.length / 3); gl.glDisableClientState(GL10.GL_VERTEX_ARRAY); } } Thanks!!
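    One detail that stands out: both onSurfaceCreated() and onSurfaceChanged() are left empty ("NO CHANGE - SKIP"), so no viewport or projection matrix is ever set up, and a quad translated to z = -6 would simply fall outside the default viewing volume. A minimal sketch of what those two callbacks in GLRenderEx might contain (the clear color, field of view, and clip planes are my own assumptions, not values from the original post):

        @Override
        public void onSurfaceCreated(GL10 gl, EGLConfig config) {
            gl.glClearColor(0.0f, 0.0f, 0.0f, 1.0f); // clear-value used by onDrawFrame's glClear
        }

        @Override
        public void onSurfaceChanged(GL10 gl, int width, int height) {
            if (height == 0) height = 1;            // guard against division by zero
            gl.glViewport(0, 0, width, height);     // draw to the whole surface
            gl.glMatrixMode(GL10.GL_PROJECTION);    // set up a perspective projection
            gl.glLoadIdentity();
            GLU.gluPerspective(gl, 45.0f, (float) width / height, 0.1f, 100.0f);
            gl.glMatrixMode(GL10.GL_MODELVIEW);     // back to model-view for onDrawFrame
            gl.glLoadIdentity();
        }

    With a projection like that in place, the two translate calls in onDrawFrame (a net translation of 1.5, 0, -6) should leave the square inside the viewing frustum, slightly right of center; translating to (0, 0, -6) instead would put it in the middle of the screen.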

    Read the article

  • How to copy DispatcherObject (BitmapSource) into different thread?

    - by Tomáš Kafka
    Hi, I am trying to figure out how I can copy a DispatcherObject (in my case a BitmapSource) into another thread. Use case: I have a WPF app that needs to show a window in a new thread (the app is actually an Outlook addin and we need to do this because Outlook has some hooks in the main UI thread and is stealing certain hotkeys that we need to use - 'lost in translation' in the interop of Outlook, WPF (which we use for UI), and WinForms (we need to use certain Microsoft-provided WinForms controls)). With that, I have my implementation of WPFMessageBox, which is configured by setting some static properties - and one of them is the BitmapSource for the icon. This is used so that at startup I can set WPFMessageBox.Icon once, and from then on, every WPFMessageBox will have the same icon. The problem is that the BitmapSource, which is assigned to the icon, is a DispatcherObject, and when read, it will throw InvalidOperationException: "The calling thread cannot access this object because a different thread owns it.". How can I clone that BitmapSource into the current thread? It has Clone() and CloneCurrentValue() methods, which don't work (they throw the same exception as well). It also occurred to me to use originalIcon.Dispatcher.Invoke( do the cloning here ) - but the BitmapSource's Dispatcher is null, and still - I'd create a copy on the wrong thread and still couldn't use it on mine. BitmapSource.IsFrozen == true. Any idea on how to copy the BitmapSource into a different thread (without completely reconstructing it from an image file in a new thread)? EDIT: So, freezing does not help: In the end I have a BitmapFrame (Window.Icon doesn't take any other kind of ImageSource anyway), and when I assign it as a Window.Icon on a different thread, even if frozen, I get InvalidOperationException: "The calling thread cannot access this object because a different thread owns it." with the following stack trace: WindowsBase.dll!System.Windows.Threading.Dispatcher.VerifyAccess() + 0x4a bytes WindowsBase.dll!System.Windows.Threading.DispatcherObject.VerifyAccess() + 0xc bytes PresentationCore.dll!System.Windows.Media.Imaging.BitmapDecoder.Frames.get() + 0xe bytes PresentationFramework.dll!MS.Internal.AppModel.IconHelper.GetIconHandlesFromBitmapFrame(object callingObj = {WPFControls.WPFMBox.WpfMessageBoxWindow: header}, System.Windows.Media.Imaging.BitmapFrame bf = {System.Windows.Media.Imaging.BitmapFrameDecode}, ref MS.Win32.NativeMethods.IconHandle largeIconHandle = {MS.Win32.NativeMethods.IconHandle}, ref MS.Win32.NativeMethods.IconHandle smallIconHandle = {MS.Win32.NativeMethods.IconHandle}) + 0x3b bytes > PresentationFramework.dll!System.Windows.Window.UpdateIcon() + 0x118 bytes PresentationFramework.dll!System.Windows.Window.SetupInitialState(double requestedTop = NaN, double requestedLeft = NaN, double requestedWidth = 560.0, double requestedHeight = NaN) + 0x8a bytes PresentationFramework.dll!System.Windows.Window.CreateSourceWindowImpl() + 0x19b bytes PresentationFramework.dll!System.Windows.Window.SafeCreateWindow() + 0x29 bytes PresentationFramework.dll!System.Windows.Window.ShowHelper(object booleanBox) + 0x81 bytes PresentationFramework.dll!System.Windows.Window.Show() + 0x48 bytes PresentationFramework.dll!System.Windows.Window.ShowDialog() + 0x29f bytes WPFControls.dll!WPFControls.WPFMBox.WpfMessageBox.ShowDialog(System.Windows.Window owner = {WPFControlsTest.MainWindow}) Line 185 + 0x10 bytes C#

    Read the article

  • java spring and ftl

    - by bobby
    I defined a ModelAndView attribute named "buttonPressed" in a Spring controller file, and I need to access that attribute in the ftl (FreeMarker) file which is being returned as a view from a controller like abcd.java. The abcd.java controller code is as below: if (questionAnswer.getAnswerId() == 1045) { modelAndView.addObject("buttonPressed","You have been added in mailing list"); modelAndView.setViewName("enterCode_nextSteps"); } else { modelAndView.addObject("buttonPressed","Not added in the mailing list"); modelAndView.setViewName("enterCode_nextSteps"); } The AJAX function below is working fine currently, but I am not sure how to access this attribute called "buttonPressed" in the AJAX function. I have written it the way mentioned below, but when I click the submit link it is not calling "partner.do" and is also throwing an error saying #buttonPressed is undefined (but in the script below it is working fine, calling "partner.do" and even posting data). So is the problem coming from the JavaScript code, I mean due to incorrect use of "buttonPressed", or might the problem be in the Spring controller abcd.java file? <div class="partnerOptInBox"> <div id="optInContent"> <form name="partnerOptIn"> <h4>Want the Latest</h4> <p class="pad10Top">${partnerOpt.translation}</p> <div class="pad10Top"> <input type="radio" name="questionAnswer['${partnerOpt.questionId}']" value="${partnerOpt.getAnswers()[0].answerId}" class="radioButton" /> <label for="questionAnswer['${partnerOpt.questionId}']" class="formLabel pad20Right">Yes</label> <input type="radio" name="questionAnswer['${partnerOpt.questionId}']" class="radioButton" value="${partnerOpt.getAnswers()[1].answerId}" /> <label for="questionAnswer['${partnerOpt.questionId}']" class="formLabel">No</label> </div> <div id="optInError" class="formError" style="display:none;">Oops... your request did not go through, please try again.</div> <div class="pad15Top"> <a href="javascript:submitOptIn();"><img src="images/theme/btn_opt_in_submit.gif"/></a> </div> </form> <script type="text/javascript"> function submitOptIn() { $('optInError').hide(); dataString = $('#partnerOptIn').serialize(); $.ajax({ data: dataString, url: "partnerOpt.do", timeout: 30000, type: "POST", success: function(html){ var newHtml = "<h4>Thank you</h4><p>We appreciate your time to respond to our request.</p>"; $('#optInContent').html(newHtml); }, */trying this code for success is throwing me an error /* <!-- buttonpressed function--> success: function(html){ $('#optInContent').html(${buttonPressed}); }, <!-- buttonpressed function--> error: function(){ $('#optInError').show(); } }); } </script>
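    One way to read this (my own reading and a sketch, not the original project code): ${buttonPressed} is evaluated by FreeMarker when the page containing the script is first rendered, and at that point the attribute has not been added to any model yet, which is what produces the "undefined" error; the attribute added inside the partnerOpt handler only exists for the view that handler itself renders, never for the page issuing the AJAX call. If the goal is just to show the controller's message in the success callback, the handler can write it straight into the response body instead of into a ModelAndView. The mapping and method name below are assumptions, and the sketch presumes annotation-based Spring MVC plus the servlet API:

        // Hypothetical Spring MVC handler for the AJAX post to partnerOpt.do
        @RequestMapping(value = "/partnerOpt.do", method = RequestMethod.POST)
        public void handlePartnerOpt(HttpServletRequest request,
                                     HttpServletResponse response) throws IOException {
            // ... evaluate the submitted questionAnswer parameters here ...
            String buttonPressed = "You have been added in mailing list"; // or the "No" message
            response.setContentType("text/plain; charset=UTF-8");
            response.getWriter().write(buttonPressed); // arrives as 'html' in the $.ajax success callback
        }

    The jQuery success callback can then call $('#optInContent').html(html) with whatever string the controller wrote, instead of referencing ${buttonPressed} directly inside the script.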

    Read the article

< Previous Page | 37 38 39 40 41 42 43 44 45 46  | Next Page >