Search Results

Search found 13318 results on 533 pages for 'svn config'.


  • Importing hibernate configuration file into Spring applicationContext

    - by Himanshu Yadav
    I am trying to integrate Hibernate 3 with Spring 3.1.0. The problem is that the application is not able to find the mapping files declared in the hibernate.cfg.xml file. Originally, the Hibernate configuration held the datasource configuration, the Hibernate properties and the hbm.xml mapping files. The master hibernate.cfg.xml file exists in the src folder. This is how the master file looks:

        <hibernate-configuration>
          <session-factory>
            <!-- Mappings -->
            <mapping resource="com/test/class1.hbm.xml"/>
            <mapping resource="/class2.hbm.xml"/>
            <mapping resource="com/test/class3.hbm.xml"/>
            <mapping resource="com/test/class4.hbm.xml"/>
            <mapping resource="com/test/class5.hbm.xml"/>

    The Spring config is:

        <bean id="sessionFactoryEditSolution" class="org.springframework.orm.hibernate3.LocalSessionFactoryBean">
          <property name="dataSource" ref="data1"/>
          <property name="mappingResources">
            <list>
              <value>/master.hibernate.cfg.xml</value>
            </list>
          </property>
          <property name="hibernateProperties">
            <props>
              <prop key="hibernate.dialect">org.hibernate.dialect.SQLServerDialect</prop>
              <prop key="hibernate.cache.use_second_level_cache">true</prop>
            </props>
          </property>
        </bean>
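
    The likely issue is a config detail rather than a code change: in LocalSessionFactoryBean, mappingResources expects individual *.hbm.xml files, while a whole hibernate.cfg.xml belongs in configLocation. A minimal Java sketch of the equivalent wiring, assuming the master file sits on the classpath as master.hibernate.cfg.xml and a DataSource is already available:

        import java.util.Properties;
        import javax.sql.DataSource;
        import org.hibernate.SessionFactory;
        import org.springframework.core.io.ClassPathResource;
        import org.springframework.orm.hibernate3.LocalSessionFactoryBean;

        public class SessionFactorySketch {
            // A sketch, not the poster's code: hibernate.cfg.xml (with its <mapping>
            // entries) goes into configLocation; mappingResources is for .hbm.xml files.
            public static SessionFactory build(DataSource dataSource) throws Exception {
                LocalSessionFactoryBean factory = new LocalSessionFactoryBean();
                factory.setDataSource(dataSource);
                factory.setConfigLocation(new ClassPathResource("master.hibernate.cfg.xml"));
                Properties props = new Properties();
                props.setProperty("hibernate.dialect", "org.hibernate.dialect.SQLServerDialect");
                props.setProperty("hibernate.cache.use_second_level_cache", "true");
                factory.setHibernateProperties(props);
                factory.afterPropertiesSet(); // builds the SessionFactory
                return (SessionFactory) factory.getObject();
            }
        }

    In the XML above, the same change would swap the mappingResources property for <property name="configLocation" value="classpath:master.hibernate.cfg.xml"/>.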

    Read the article

  • Loader.php trying to load Doctrine classes, but we use Propel!

    - by kewpiedoll99
    We are finding cases where we get the following 500 error: File xyz.php does not exist or class "xyz" was not found in the file at () in SF_ROOT_DIR/lib/vendor/Zend/Loader.php line 107 ... where xyz == Memcache (when trying to use symfony cc on the command line) or sfDoctrineAdminGenerator (when using an old-ish AdminGenerator-generated CMS page). We use Propel, but Loader.php is trying to load classes used only by Doctrine. Currently I am using a filthy hack in Loader.php that checks whether the file is either of these two cases, and if so simply returns rather than trying to load it. Obviously, this is unacceptable in the longer term. Has anybody encountered this, and how did you solve it? Edited to add: We have:

        class ProjectConfiguration extends sfProjectConfiguration
        {
          public function setup()
          {
            // for compatibility / remove and enable only the plugins you want
            $this->enableAllPluginsExcept(array('sfDoctrinePlugin'));
          }
        }

    And we have a propel.ini file in our top-level config directory. This has only started in the past four weeks or so, and we've had a stable build for over a year now. I'm pretty sure Doctrine is totally disabled.

    Read the article

  • Hadoop reduce task gets hung

    - by user806098
    I set up a Hadoop cluster with 4 nodes. When running a map-reduce job, the map tasks finish quickly, while the reduce task hangs at 27%. I checked the log; the reduce task fails to fetch map output from the map nodes. The job tracker log on the master shows messages like this:

        2011-06-27 19:55:14,748 INFO org.apache.hadoop.mapred.JobTracker: Adding task (REDUCE) 'attempt_201106271953_0001_r_000000_0' to tip task_201106271953_0001_r_000000, for tracker 'tracker_web30.bbn.com.cn:localhost/127.0.0.1:56476'

    And the name node log on the master shows messages like this:

        2011-06-27 14:00:52,898 INFO org.apache.hadoop.ipc.Server: IPC Server handler 4 on 54310, call register(DatanodeRegistration(202.106.199.39:50010, storageID=DS-1989397900-202.106.199.39-50010-1308723051262, infoPort=50075, ipcPort=50020)) from 192.168.225.19:16129: error: java.io.IOException: verifyNodeRegistration: unknown datanode 202.106.199.3 9:50010

    However, neither "web30.bbn.com.cn" nor 202.106.199.39 / 202.106.199.3 is a slave node. I think such IPs/domains appear because Hadoop fails to resolve a node (first against the intranet DNS server), then goes to a higher-level DNS server, later to the top, still fails, and the "junk" IPs/domains are returned. But I checked my config, and it goes like this:

        /etc/hosts:
        127.0.0.1       localhost.localdomain localhost
        ::1             localhost6.localdomain6 localhost6
        192.168.225.16  master
        192.168.225.66  slave1
        192.168.225.20  slave5
        192.168.225.17  slave17

        conf/core-site.xml:
        hadoop.tmp.dir  = /root/hadoop_tmp/hadoop_${user.name}
        fs.default.name = hdfs://master:54310
        io.sort.mb      = 1024

        hdfs-site.xml:
        dfs.replication = 3

        masters: master
        slaves:  master slave1 slave5 slave17

    Also, all firewalls (iptables) are turned off, and ssh between each pair of nodes is OK, so I don't know where exactly the error comes from. Please help. Thanks a lot.
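
    If the DNS theory is right, it is directly testable. A hedged little Java check, run on the master and on each slave, prints what every cluster hostname resolves to (and reverse-resolves to) on that machine; the node names are taken from the /etc/hosts above:

        import java.net.InetAddress;

        public class ResolveCheck {
            public static void main(String[] args) throws Exception {
                // Node names from the /etc/hosts listing above
                String[] nodes = {"master", "slave1", "slave5", "slave17"};
                for (String name : nodes) {
                    InetAddress addr = InetAddress.getByName(name);
                    System.out.println(name + " -> " + addr.getHostAddress()
                            + " (reverse: " + addr.getCanonicalHostName() + ")");
                }
            }
        }

    If a reverse lookup comes back as something like web30.bbn.com.cn, that box's resolver is consulting an outside DNS server before /etc/hosts.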

    Read the article

  • [Qt] How to get rid of OCI.dll dependency when compiling static

    - by STL
    Hi, my application accesses an Oracle database through Qt's QSqlDatabase class. I'm compiling Qt as static for the release build, but I can't seem to get rid of the OCI.dll dependency. I'm trying to link against oci.lib (as available in Oracle's Instant Client with SDK). Here's my configure line:

        configure -qt-libjpeg -qt-zlib -qt-libpng -nomake examples -nomake demos -no-exceptions -no-stl -no-rtti -no-qt3support -no-scripttools -no-openssl -no-opengl -no-phonon -no-style-motif -no-style-cde -no-style-cleanlooks -no-style-plastique -static -release -opensource -plugin-sql-oci -plugin-sql-sqlite -platform win32-msvc2005

    I point the build at oci.h and oci.lib in the SDK's folder by using:

        set INCLUDE=C:\oracle\instantclient\sdk\include;%INCLUDE%
        set LIB=C:\oracle\instantclient\sdk\lib\msvc;%LIB%

    Then, once Qt is compiled, I use the following lines in my *.pro file:

        QT += sql
        CONFIG += static
        LIBS += C:\oracle\instantclient\sdk\lib\msvc\oci.lib
        QTPLUGIN += qsqloci

    Then, in my main.cpp, I add the following to statically compile the OCI plugin into the application:

        #include <QtPlugin>
        Q_IMPORT_PLUGIN(qsqloci)

    After compiling the project, I test it on my workstation and it works (as I have Oracle Instant Client installed). When I try on another workstation, I always get the message: "This application has failed to start because OCI.dll was not found. Re-installing this application may fix this problem." I don't understand why I still need OCI.dll, as my statically linked application is supposed to link to oci.lib instead. Are there any Qt people here who might have a solution for me? Thanks a lot! STL

    Read the article

  • PHP: Profiling code and strict environment ~ Improving my coding

    - by DavidYell
    I would like to update my local working environment to be stricter, in an effort to improve my code. I know that my code is okay, but as with most things there is always room for improvement. I use XAMPP on my local machine; for simplicity's sake, Apache Friends XAMPP (Basic Package) version 1.7.2. So I've updated my php.ini: error_reporting is now E_ALL | E_STRICT to help with the coding standard. I've also enabled the XDebug extension, zend_extension = "C:\xampp\php\ext\php_xdebug.dll", which seems to be working, having tested some broken code and got the nice standard orange error notice. However, having read this question, http://stackoverflow.com/questions/133686/what-is-the-best-way-to-profile-php-code, and enabled the profiler, I cannot seem to create a cachegrind file. Many of the guides that I've looked at seem to think you need to install XDebug in XAMPP, which leads me to think they are out of date, as XDebug is bundled with XAMPP these days. So I would appreciate it if anyone could point me in the right direction, both on configuring XDebug to output grind files and on a great set of default settings for the XDebug config in XAMPP. There seems to be very little documentation to go on. If people have tips on integrating these tools with Netbeans, that would be awesomesauce. I'm happy to get suggestions on other things that I can do to help tighten up my PHP code, both syntactically and performance-wise. Thanks, and apologies for the rambling question(s)! Ninja edit: I should mention that I'm using named vhosts as my Apache configuration, which I think is why running XDebug on port 9000 isn't working for me. I guess I'd need to edit my vhost to include port 9000.
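
    For the profiler part, a hedged php.ini sketch of the XDebug 2.x settings involved (the output directory is an assumption; it must exist and be writable by Apache):

        ; A sketch, assuming the XDebug 2.x bundled with XAMPP 1.7.2
        zend_extension = "C:\xampp\php\ext\php_xdebug.dll"
        xdebug.profiler_enable = 1                   ; profile every request...
        xdebug.profiler_enable_trigger = 0           ; ...or set this to 1 and add XDEBUG_PROFILE=1 to a request
        xdebug.profiler_output_dir = "C:\xampp\tmp"  ; cachegrind.out.* files land here (assumed path)
        xdebug.remote_enable = 1                     ; step-debugging from Netbeans
        xdebug.remote_host = "localhost"
        xdebug.remote_port = 9000

    Note that port 9000 is the port XDebug connects out to the IDE on, not a port Apache listens on, so named vhosts shouldn't interfere with it.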

    Read the article

  • Why are Managed Beans not loaded in Tomcat?

    - by c0d3x
    Hi, I created a JSF 2 web application with Facelets. The JSF libs were stored in tomcat/lib, to share them between several applications. I thought it might be better to store the libs inside the WEB-INF/lib folder of the application, to make the application more independent from server configuration. Now when I start Tomcat via Eclipse, the managed beans are loaded and working. But when I start Tomcat directly / standalone, the managed beans are not loaded automatically. I used @ManagedBean and @SessionScoped / @RequestScoped annotations to declare classes as managed beans. Why is this? What can I do to fix it? I don't use any faces-config.xml file yet. Thanks in advance. Edited: Maybe this helps to see what's going on:

        javax.el.PropertyNotFoundException: /Artikel.xhtml @12,108 value="#{artikelBackingBean.nameFilterPattern}": Target Unreachable, identifier 'artikelBackingBean' resolved to null
            at com.sun.faces.facelets.el.TagValueExpression.getType(TagValueExpression.java:93)
            at com.sun.faces.renderkit.html_basic.HtmlBasicInputRenderer.getConvertedValue(HtmlBasicInputRenderer.java:95)
            at javax.faces.component.UIInput.getConvertedValue(UIInput.java:1008)
            at javax.faces.component.UIInput.validate(UIInput.java:934)
            at javax.faces.component.UIInput.executeValidate(UIInput.java:1189)
            at javax.faces.component.UIInput.processValidators(UIInput.java:691)
            at javax.faces.component.UIForm.processValidators(UIForm.java:243)
            at javax.faces.component.UIComponentBase.processValidators(UIComponentBase.java:1080)
            at javax.faces.component.UIViewRoot.processValidators(UIViewRoot.java:1180)
            at com.sun.faces.lifecycle.ProcessValidationsPhase.execute(ProcessValidationsPhase.java:76)
            at com.sun.faces.lifecycle.Phase.doPhase(Phase.java:101)
            at com.sun.faces.lifecycle.LifecycleImpl.execute(LifecycleImpl.java:118)
            at javax.faces.webapp.FacesServlet.service(FacesServlet.java:312)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
            at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
            at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
            at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:433)
            at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:128)
            at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
            at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
            at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
            at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:849)
            at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:583)
            at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:454)
            at java.lang.Thread.run(Thread.java:619)
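
    For context, a minimal sketch, assuming the names in the stack trace, of the bean that #{artikelBackingBean.nameFilterPattern} expects to find; when an annotated bean like this is never registered (for example, because the annotations aren't scanned at startup), the EL resolver reports exactly this "resolved to null":

        import javax.faces.bean.ManagedBean;
        import javax.faces.bean.SessionScoped;

        // Hypothetical reconstruction of the bean referenced by /Artikel.xhtml
        @ManagedBean        // default name: artikelBackingBean
        @SessionScoped
        public class ArtikelBackingBean {
            private String nameFilterPattern;

            public String getNameFilterPattern() { return nameFilterPattern; }
            public void setNameFilterPattern(String nameFilterPattern) {
                this.nameFilterPattern = nameFilterPattern;
            }
        }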

    Read the article

  • Git submodules not updating?

    - by DavidYell
    I have a project in which I've included some libraries as submodules. They work fine on the machine that you add them on, but when I get home and check out the repo, I get the folders for the submodules but they are empty.

    .gitmodules:

        Neon@Neon-PC /cygdrive/c/xampp/htdocs/learning-lithium
        $ cat .gitmodules
        [submodule "libraries/lithium"]
                path = libraries/lithium
                url = git://github.com/UnionOfRAD/lithium.git
        [submodule "app/webroot/css/elements"]
                path = app/webroot/css/elements
                url = https://github.com/dmitryf/elements.git
        [submodule "app/libraries/li3_markdown"]
                path = app/libraries/li3_markdown
                url = https://github.com/sandelius/li3_markdown.git
        [submodule "app/webroot/markitup"]
                path = app/webroot/markitup
                url = https://github.com/markitup/1.x.git

    Config and status commands:

        $ git submodule
        -af14f48b419310935446176290e1f9dc641400e0 app/libraries/li3_markdown
        -ebdcd8ca09c874f5e2ef81ec198cc441f37a4f74 app/webroot/css/elements
        -328291e49a3c7e1fb76b3342f112734864836205 app/webroot/markitup
        -4980010526d05c556c496ff63951da31828c5943 libraries/lithium

        $ git submodule update

        $ git submodule status
        -af14f48b419310935446176290e1f9dc641400e0 app/libraries/li3_markdown
        -ebdcd8ca09c874f5e2ef81ec198cc441f37a4f74 app/webroot/css/elements
        -328291e49a3c7e1fb76b3342f112734864836205 app/webroot/markitup
        -4980010526d05c556c496ff63951da31828c5943 libraries/lithium

    I added these as you would normally with:

        git submodule add <repo> <path>
        git submodule init

    The submodules are hosted on GitHub and my repo is hosted on Bitbucket, although I'm not sure if this is relevant.

    Read the article

  • Microsoft products such as Visual Studio 2010 do not require entering a serial number

    - by MainMa
    Hi, I am a member of WebsiteSpark and was a member of DreamSpark. Both programs let you download software and provide serial keys to use. Some software, like Windows Server, has an ISO file to download and a serial number displayed on the website which I must enter during installation. Some other software does not have any serial key. For example, when I downloaded Visual Studio 2010, there was just a link to an ISO file. During installation, there was no such field as a serial number (whereas Visual Studio 2008 had this field at the beginning of the installation process). It is the same with SQL Server 2008 and Microsoft Expression Studio 3. Even when I downloaded the public trial RTM version of Windows Seven Enterprise, there was no serial number to enter. I don't think that such expensive products as SQL Server 2008 Enterprise are delivered without serials and online validation, so I suppose that the serial is embedded in the product itself, either in the installation binaries or in a separate config file; it is already in the ISO I download, so I do not have to enter it. So my question is, how is this done technically? Is each 2 GB ISO generated on demand on the server, to embed a serial each time the ISO is requested? I suppose that if it is, it has a huge impact on server performance (no caching, no streaming...), so what techniques might be used behind the scenes? I want to implement the same feature in a product I intend to ship (to simplify installation by avoiding asking for a serial number), but I really don't see how to do it with low impact on server performance.
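
    One plausible technique (my assumption, not anything Microsoft documents): keep a single prebuilt ISO containing a placeholder field, and patch the serial into that known offset as the file goes out, which costs almost nothing compared to regenerating 2 GB per download. A hedged Java sketch of the patching idea:

        import java.io.RandomAccessFile;

        public class SerialPatcher {
            // Hypothetical scheme: the installer inside the ISO reads its serial
            // from a fixed placeholder region written at build time.
            public static void patch(String isoPath, long offset, String serial) throws Exception {
                try (RandomAccessFile iso = new RandomAccessFile(isoPath, "rw")) {
                    iso.seek(offset);                       // jump to the placeholder
                    iso.write(serial.getBytes("US-ASCII")); // overwrite it in place
                }
            }
        }

    A download server could do the same substitution on the outgoing byte stream instead of on disk, so the base image stays cacheable.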

    Read the article

  • User Friendly Video Review with Locking

    - by James Cori
    We have a jQuery/PHP/MySQL system that allows a user to log in and review videos built by a system for online viewing. When a user begins reviewing a video, the video is marked as such. But now we've cornered ourselves into the classic browser-based application problem of the user navigating away or closing the browser without completing the review. That video would then enter a state of limbo: constantly being reviewed, never completed, and never re-entering the queue. Options we have are: (1) build a service (we already have others) to find review sessions that are outside a duration boundary and reset them back into the queue (see the sketch after this paragraph); (2) reset review sessions outside a duration boundary when that user logs in, so essentially, if a user locks out a video for review, it'll be unlocked the next time they log in. A suggestion made to me was to use the PHP/Apache session length and, on expiration, reset any pending review jobs. I don't even know where to look to implement this, as this is one project on a shared server, so it shouldn't be an Apache config, and the reset mechanism would need to know the database credentials to be able to reset the jobs... The worst solution, which everyone hates, is preventing the user from navigating away with JavaScript, asking "Are you sure?!" This system is used by a few hired reviewers, so I'm not exactly dealing with the public here, but I can't prevent users from sharing logins for speedier review, which would knock out the 2nd option above because it would unlock a video being reviewed by someone else using the same login.
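
    A hedged sketch of option 1, the periodic sweep. The table and column names and the two-hour boundary are hypothetical; the stack here is PHP/MySQL, so the SQL is the portable part, wrapped in a minimal Java runner only for illustration:

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.PreparedStatement;

        // Cron-style sweep: return stale review sessions to the queue.
        // Table/column names and the 2-hour limit are assumptions.
        public class StaleReviewSweep {
            public static void main(String[] args) throws Exception {
                try (Connection conn = DriverManager.getConnection(
                            "jdbc:mysql://localhost/videos", "user", "pass");
                     PreparedStatement stmt = conn.prepareStatement(
                            "UPDATE videos SET status = 'queued', reviewer_id = NULL "
                          + "WHERE status = 'in_review' AND review_started_at < NOW() - INTERVAL 2 HOUR")) {
                    int reset = stmt.executeUpdate();
                    System.out.println("Returned " + reset + " stale videos to the queue");
                }
            }
        }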

    Read the article

  • spring.net proxy factory with target type needs property virtual?

    - by Vince
    Hi all, I'm creating a Spring.NET proxy in code by using the ProxyFactory object with ProxyTargetType set to true, to have a proxy on a non-interfaced complex object. Proxying seems OK until I call a method on that object. The method references a public property, and if this property is not virtual its value is null. This doesn't happen if I use Spring.Aop.Framework.AutoProxy.InheritanceBasedAopConfigurer in the Spring config file, but in this case I can't use that because the Spring context doesn't own this object. Is it normal to have such behavior, or is there a tweak to achieve what I want (proxying an object's virtual methods without having to make the properties virtual)? Note that I tried the factory.AutoDetectInterfaces and factory.ProxyTargetAttributes values, but they don't help. My proxy creation code:

        public static T CreateMethodCallStatProxy<T>()
        {
            // Proxy factory
            ProxyFactory factory = new ProxyFactory();
            factory.AddAdvice(new CallMonitorTrackerAdvice());
            factory.ProxyTargetType = true;

            // Create instance
            factory.Target = Activator.CreateInstance<T>();

            // Get proxy
            T proxiedClass = (T)factory.GetProxy();

            return proxiedClass;
        }

    Thanks for your help

    Read the article

  • What's wrong with my code? (pdcurses/getmaxyx)

    - by flarn2006
    It gives me an access violation on the getmaxyx line (second line in the main function) and also gives me these two warnings:

        LINK : warning LNK4049: locally defined symbol "_stdscr" imported
        LINK : warning LNK4049: locally defined symbol "_SP" imported

    Yes, it's the same code as in another question I asked, it's just that I'm making it clearer. And yes, I have written programs with pdcurses before with no problems.

        #include <time.h>
        #include <curses.h>
        #include "Ball.h"
        #include "Paddle.h"
        #include "config.h"

        int main(int argc, char *argv[])
        {
            int maxY, maxX;
            getmaxyx(stdscr, maxY, maxX);

            Paddle *paddleLeft = new Paddle(0, KEY_L_UP, KEY_L_DOWN);
            Paddle *paddleRight = new Paddle(maxX, KEY_R_UP, KEY_R_DOWN);
            Ball *ball = new Ball(paddleLeft, paddleRight);

            int key = 0;

            initscr();
            cbreak();
            noecho();
            curs_set(0);

            while (key != KEY_QUIT)
            {
                key = getch();
                paddleLeft->OnKeyPress(key);
                paddleRight->OnKeyPress(key);
            }

            endwin();
            return 0;
        }

    Read the article

  • problem loading texture with transparency with OpenGL ES and Android

    - by Evan Kimia
    I'm trying to load an image that has background transparency and will be layered over another texture. When I try to load it, all I get is a white screen. The texture is 512 by 512, and it's saved in Photoshop as a 24-bit PNG (standard PNG specs in the Photoshop Save for Web and Devices config window). Any idea why it's not showing? The texture without transparency shows without a problem. Here is my loadGLTexture method:

        public void loadGLTexture(GL10 gl, Context context) {
            // Get the textures from the Android resource directory
            Bitmap bitmap = BitmapFactory.decodeResource(context.getResources(), R.drawable.m1);
            Bitmap normalScheduleLines = BitmapFactory.decodeResource(context.getResources(), R.drawable.m1n);

            // Generate texture pointers...
            gl.glGenTextures(3, textures, 0);

            // ...and bind the first to our array
            gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[1]);

            // Create nearest filtered texture
            gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR_MIPMAP_NEAREST);
            gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
            gl.glTexParameterf(GL11.GL_TEXTURE_2D, GL11.GL_GENERATE_MIPMAP, GL11.GL_TRUE);
            gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_CLAMP_TO_EDGE);
            gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_CLAMP_TO_EDGE);
            GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);
            bitmap.recycle();

            // Bind our normal schedule bus map lines
            gl.glBindTexture(GL10.GL_TEXTURE_2D, textures[0]);

            // Create nearest filtered texture
            gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR_MIPMAP_NEAREST);
            gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
            gl.glTexParameterf(GL11.GL_TEXTURE_2D, GL11.GL_GENERATE_MIPMAP, GL11.GL_TRUE);
            gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_CLAMP_TO_EDGE);
            gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_CLAMP_TO_EDGE);
            GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, GL10.GL_RGBA, normalScheduleLines, 0);
            normalScheduleLines.recycle();
        }
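
    One hedged thing worth checking before blaming the GL state: a 24-bit PNG can be decoded into a bitmap without an alpha channel, and uploading such a bitmap against GL_RGBA is a format mismatch. Inside loadGLTexture, something like this (an assumed diagnostic, not a confirmed fix) forces and verifies an ARGB_8888 decode:

        // Force a decode that keeps the alpha channel, then log what we actually got
        android.graphics.BitmapFactory.Options opts = new android.graphics.BitmapFactory.Options();
        opts.inPreferredConfig = android.graphics.Bitmap.Config.ARGB_8888;
        Bitmap normalScheduleLines = BitmapFactory.decodeResource(
                context.getResources(), R.drawable.m1n, opts);
        android.util.Log.d("TextureLoad", "decoded config: " + normalScheduleLines.getConfig());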

    Read the article

  • jquery ajax post and response evaluation

    - by Rami
    Hello people, I am relatively new to JavaScript, and jQuery in particular, so please bear with me. I am trying to loop through multiple forms, serialize() the data with jQuery, and post it using ajax to my PHP page. That part is happening all right: the data is posted, my PHP script echoes 1, and everything is taken care of. But for some strange reason the following code is not working, specifically the "success" variable: it's not increasing at all! Would you please help me?

        $('.submitB').click(function(){
            var success = 0;
            var times = 0;
            var alertText;
            $('.input').each(function(){
                times++;
                var serializedForms = $(this).serialize();
                $.post('<?=$this->config->site_url()?>crud/additem/forms', serializedForms, function(data){
                    if (data) {
                        success++;
                    }
                });
            });
            if (times) {
                alertText = "?? ????? " + success + " ???? ?? ??? " + times + " ?????.";
                alert(alertText);
            }
        })

    (The Arabic text, mangled above, just says "success + entries out of + times + were entered successfully".) Thank you in advance.

    Read the article

  • Redirect uploaded files to another server, using nginx

    - by Serg ikS
    I am creating a web service of scheduled posts to a social network, and I need help dealing with file uploads under high traffic. Process overview: the user uploads files to SomeServer (not mine); SomeServer then responds with a JSON string; my web app should store that JSON response.

    Opt. 1 — Save, cURL POST, delete tmp. The stupid way I made it work: the user uploads files to MyWebApp; MyWebApp cURLs the file onward to SomeServer, getting the response.

    Opt. 2 — JS magic. The smart way it could be perfect: the user uploads the file directly to SomeServer, from within an iFrame; MyWebApp gets the response through JavaScript. But this is(?) impossible due to the 'Same Origin Policy', isn't it?

    Opt. 3 — nginx proxying? The better way for a production server: the user uploads files to MyWebApp; nginx intercepts the file uploads and sends them directly to SomeServer; the JSON response is also intercepted by nginx and processed by MyWebApp. Does this make any sense, and what would be the nginx config for, say, a /fileupload location to proxy it to SomeServer?
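
    For what option 3 could roughly look like, a hedged nginx sketch; the upstream host and paths are placeholders, and whether SomeServer accepts proxied uploads at all is an open assumption:

        # Sketch only: forward uploads posted to /fileupload straight to SomeServer
        location /fileupload {
            proxy_pass           http://someserver.example/upload;  # hypothetical upstream URL
            proxy_set_header     Host someserver.example;
            client_max_body_size 100m;   # allow large upload bodies
        }

    Plain proxying like this does not by itself hand the JSON response to MyWebApp for processing; that part would still need its own hop.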

    Read the article

  • Alternative to 'Dispatch for ASP' deployment plug-in?

    - by Django Reinhardt
    Hi there, we've recently stumbled across the excellent Dispatch for ASP deployment plug-in. It looks great apart from one thing: it doesn't work with Visual Studio 2010, at least for us, anyway. (It's supposed to work fine.) (Yes, we've tried everything: we've managed to get Dispatch working for another FTP site, but not the main one we regularly deploy to. We have managed to connect to our main site through FileZilla FTP, so the site itself is configured correctly. All settings have been triple-checked, but the software still throws up weird errors, always to do with its internal libraries.) So does anyone know of any other comparable FTP-based deployment plug-ins for Visual Studio? Here's what Dispatch does (and so any suggested replacement must do): it monitors any altered files in the project; when a file is changed, it's added to a list of files to be deployed; to deploy these files to the live site, all we need to do is click "Upload" and the plug-in connects via FTP to our live site and uploads all the files. We can filter out any filenames we don't want monitored/uploaded (e.g. .cs or web.config or /Images/, etc.). I think that's all the features we need. Thanks for any suggestions!

    Read the article

  • Emulating Visual Studio's Web Application "publish" at the command line

    - by cbp
    Hi, I am trying to automate my deployment process and am now thoroughly confused. I know that there are many questions on stackoverflow about this, but they all have different solutions and none of them work. I have a Web Application project which I usually publish by right-clicking and selecting "Publish". I get a dialog box where I use the following options: build configuration: Release; publish method: File system; target location: C:\Deployments\MyWebsite; replace matching files with local copies. I should mention that in the properties of the project I have "Items to deploy" set to "Only files needed to run this application". After running this, my entire solution is built, dependencies are resolved, build events are run, web.config transformations are applied, and the website is copied to C:\Deployments\MyWebsite, although non-required files such as code-behind files are not copied. I have not been able to replicate this... in fact, at this stage I'm not even sure which command-line tool I am supposed to be using: msbuild, msdeploy or aspnet_compiler? This guy asks almost the same question, but his solution doesn't work at all. For example, build events do not run correctly because the macros are not resolved. What's more, the files do not get copied into the correct directory at all... I can't even begin to explain what happens!
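
    As a starting point, a hedged sketch of the closest msbuild equivalent I know of; _CopyWebApplication ships with the Web Application targets but is technically an internal target, and the paths are the ones from the Publish dialog:

        msbuild MyWebsite.csproj /p:Configuration=Release ^
            /t:Rebuild;_CopyWebApplication ^
            /p:OutDir=C:\Deployments\obj\ ^
            /p:WebProjectOutputDir=C:\Deployments\MyWebsite\

    As far as I know, this copies the site and its references but does not apply the VS2010 web.config transformations; those run in a separate TransformWebConfig step.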

    Read the article

  • Help with Struts Action mapping

    - by nicotine
    I am having a problem with my Struts application. It is a class enrollment app, and when the user clicks on a "show enrolled courses" button it is supposed to show the courses they are enrolled in, but it shows nothing at the moment. Struts/Apache does not return any errors; it just shows a blank page, and I cannot figure out why. My action mapping in my struts-config:

        <action path="/showEnrolled"
                type="actions.ShowEnrolledAction"
                name="UserFormEnrolled"
                scope="request"
                validate="true"
                input="/students/StudentMenu.jsp">
          <forward name="success" path="/students/enrolled.jsp"/>
        </action>

    My link to the enrolled.jsp page:

        <li><html:form action="/showEnrolled">
          <html:hidden property="id" value="<%=request.getRemoteUser()%>"/>
          <html:submit value="View Enrolled Classes"/>
        </html:form></li>

    When I click the link, I get nothing but my menu on the page. The text headings for the page are not even displayed.
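
    A blank forward usually means the action ran but put nothing into scope for enrolled.jsp to render. A hedged Java sketch of what ShowEnrolledAction might need to look like; the DAO call and the attribute name are hypothetical, and the JSP has to iterate over that same attribute:

        import java.util.List;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;
        import org.apache.struts.action.Action;
        import org.apache.struts.action.ActionForm;
        import org.apache.struts.action.ActionForward;
        import org.apache.struts.action.ActionMapping;

        public class ShowEnrolledAction extends Action {
            @Override
            public ActionForward execute(ActionMapping mapping, ActionForm form,
                    HttpServletRequest request, HttpServletResponse response) throws Exception {
                String studentId = request.getParameter("id");
                // Hypothetical DAO; enrolled.jsp must render this same attribute,
                // e.g. with <logic:iterate name="enrolledCourses" ...>
                List courses = CourseDao.findEnrolled(studentId);
                request.setAttribute("enrolledCourses", courses);
                return mapping.findForward("success");
            }
        }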

    Read the article

  • Enabling a new admin action (button sales_order/view) in ACL

    - by latvian
    Hi, we created a new action, similar to 'hold', 'ship' and the others in the 'sales_order/view' admin section, that can be triggered by clicking a button. Afterward, we added our new action to the ACL with the following code in config.xml:

        <acl>
          <resources>
            <admin>
              <children>
                <sales>
                  <children>
                    <order>
                      <children>
                        <actions translate="title">
                          <title>Actions</title>
                          <children>
                            <shipNew translate="title"><title>Ship Ups</title></shipNew>
                          </children>
                        </actions>
                      </children>
                      <sort_order>10</sort_order>
                    </order>
                  </children>
                </sales>
              </children>
            </admin>
          </resources>
        </acl>

    The ACL functionality works. However, in the 'Resources Tree' (System/Permissions/Roles/Role Resources) our new action never shows up as selected (checked), even though it is allowed for that particular role. I can see from the 'admin_rule' table, with the resource id for our new action, that it is allowed, so it should be selected, but it is not. While trying to solve this issue I looked into the template (permissions/rolesedit.phtml) and found that the 'resource tree' is drawn with JavaScript... that's where I got stuck, due to my limited knowledge of JavaScript. Why does the resource tree not display our new ACL entry correctly, that is, why is the checkbox never checked? Thank you for helping. margots

    Read the article

  • URL with no query parameters - How to distinguish.

    - by Broken Link
    Env: .NET 1.1. I got into this situation: I need to give out a URL that someone could use to redirect users to our page. When they redirect, they also need to tell us what message to display on the page. Initially I thought of something like this:

        http://example.com/a.aspx?reason=100
        http://example.com/a.aspx?reason=101
        ...
        http://example.com/a.aspx?reason=115

    So when we get this URL, based on 'reason' we can display a different message. But it turns out that they cannot send any query parameters at all. They want 15 different URLs, since they can't send query params. It doesn't make any sense to me to create 15 pages just to display a message. Any smart ideas that keep one URL and pass the 'reason' through some other means? EDIT: Options I'm considering based on the answers: try HttpRequest.PathInfo; or, as a second option, have an HTTP handler read the path via HttpContext.Request.Path and act based on it. Of course, I would then have some 15 entries like this in web.config:

        <add verb="*" path="reason1.ashx" type="WebApplication1.Class1, WebApplication1" />
        <add verb="*" path="reason2.ashx" type="WebApplication1.Class1, WebApplication1" />

    Does that look clean?

    Read the article

  • How to parse text as JavaScript?

    - by Danjah
    This question of mine (currently unanswered) drove me toward finding a better solution to what I'm attempting. My requirements: chunks of code which can be arbitrarily added into a document, without an identifier:

        [div class="thing"]
          [elements... /]
        [/div]

    The objects are scanned for and found by an external script:

        var things = yd.getElementsBy(function(el){ return yd.hasClass('thing'); }, null, document);

    The objects must be individually configurable. What I have currently is identifier-based:

        [div class="thing" id="thing0"]
          [elements... /]
          [script type="text/javascript"]
            new Thing().init({ id:'thing0'; });
          [/script]
        [/div]

    So I need to ditch the identifier (id="thing0") so there are no duplicates when more than one chunk of the same code is added to a page, yet I still need to be able to configure these objects individually, without an identifier. SO! All of that said, I wondered about creating a dynamic global variable within the script block of each added chunk of code, within its script tag. As each 'thing' is found, I figure it would be legit to grab the innerHTML of the script tag and somehow convert that text into a useable JS object. Discuss. Ok, don't discuss if you like, but if you get the drift then feel free to correct my wayward thinking or provide a better solution - please! d

    Read the article

  • Thoughts on GoGrid vs EC2

    - by Jason
    I am currently hosting my SaaS application at GoGrid (Microsoft stack). Here's what I have: a database server (physical box, 12 GB RAM, 2 x quad-core 2.13 GHz Xeon E5506), two web/app servers (cloud servers, 2 GB RAM, 2 VCPUs each), and 300 GB monthly bandwidth. I am paying around $900/month for this. My web/app servers are bursting at the seams and need to be upgraded to 4 GB of RAM. I also need a firewall, and GoGrid just added this service for an additional $200. After the upgrade, I will be paying around $1,400. I started looking at Amazon EC2, specifically this config: a database server ("High Memory Double Extra Large Instance": 34 GB RAM, 13 EC2 compute units) and two web/app servers ("Large Instance": 7.5 GB RAM, 4 EC2 compute units each). If I go with 1-year reserved instances, my upfront cost would be $4,500 and my monthly cost would be $700, which comes to $1,075/month when amortized. Amazon also includes a firewall for free. Here are my questions: Do any of you have experience running a database (especially SQL Server) on an EC2 instance? How did it perform compared to a dedicated machine? One of my major concerns is disk I/O. Amazon's description of a compute unit is fairly vague; any ideas on how the CPU performance on the database servers would compare? I am hoping that the Amazon solution will provide significantly better performance than my current, or even my improved, GoGrid setup. Having a virtual database server would also be nice in terms of availability; right now I would be in serious trouble if I had any hardware issues. Thanks for any insight...

    Read the article

  • WCF ServiceContract and svcutil issue

    - by Valko
    Hi, I have a public interface auto-generated by svcutil:

        [System.ServiceModel.ServiceContractAttribute(Namespace="...", ConfigurationName="...")]
        public interface MyInterface

    Then I have an asmx web service inheriting it and working fine. I am trying to convert it to WCF, and have instrumented the service (in the asmx.cs code-behind) with ServiceContract:

        [ServiceContract(Namespace = "...")]
        public class MyService : MyInterface

    I have also created a .svc file and added the system.serviceModel settings in the config file. The goal is to migrate the asmx service to a WCF service. I've got this error: "The service class of type MyService both defines a ServiceContract and inherits a ServiceContract from type MyInterface. Contract inheritance can only be used among interface types. If a class is marked with ServiceContractAttribute, it must be the only type in the hierarchy with ServiceContractAttribute." The asmx service is still working fine; only the .svc is giving me issues. My question is how to fix this. MyInterface is an interface, so I do not see what the problem is and why I get the error anyway. Note that I do not want to change MyInterface, because it is auto-generated by svcutil from my WSDL schema, and I do not want this interface to be edited manually. The whole idea is to have the service types automatically generated from WSDL and to save my team the effort of manual editing. Any help is appreciated.

    Read the article

  • Publishing a WCF Server and client and their endpoints

    - by Ahmadreza
    Imagine developing a WCF solution with two projects (a WCF service and a web application as the WCF client). As long as I'm developing these two projects in Visual Studio and referencing the service in the client (web application) as a service reference, there is no problem. Visual Studio automatically assigns a port for the WCF server and configures everything needed, including the server and client bindings, to something like this on the server:

        <service behaviorConfiguration="DefaultServiceBehavior" name="MYWCFProject.MyService">
          <endpoint address="" binding="wsHttpBinding" contract="MYWCFProject.IMyService">
            <identity>
              <dns value="localhost" />
            </identity>
          </endpoint>
          <host>
            <baseAddresses>
              <add baseAddress="http://localhost:8731/MyService.svc" />
            </baseAddresses>
          </host>
        </service>

    and on the client:

        <client>
          <endpoint address="http://localhost:8731/MyService.svc" binding="wsHttpBinding"
                    bindingConfiguration="WSHttpBinding_IMyService" contract="MyWCFProject.IMyService"
                    name="WSHttpBinding_IMyService">
            <identity>
              <dns value="localhost" />
            </identity>
          </endpoint>
        </client>

    The problem is that I want to frequently publish these two projects to two different servers as my production servers, where the service URL will be "http://mywcfdomain/MyService.svc". I don't want to change the config file every time I publish my server project. The question is: is there any feature in Visual Studio 2008 to automatically change the URLs, or do I have to define two different endpoints and set them within my code (based on a parameter in my configuration, for example Development/Published)?

    Read the article

  • mod_rewrite with location-based ACL in apache?

    - by Alexey
    Hi. There is a CGI script that provides an API for our customers. The call syntax is:

        script.cgi?module=<str>&func=<str>[&other-options]

    The task is to apply different authentication rules to different modules. Optionally, it would be great to have nice URLs. My config:

        <VirtualHost *:80>
            DocumentRoot /var/www/example
            ServerName example.com

            # Global policy is to deny all
            <Location />
                Order deny,allow
                Deny from all
            </Location>

            # doesn't work :(
            <Location /api/foo>
                Order deny,allow
                Deny from all
                Allow from 127.0.0.1
            </Location>

            RewriteEngine On
            # The only allowed type of requests:
            RewriteRule /api/(.+?)/(.+) /cgi-bin/api.cgi?module=$1&func=$2 [PT]
            # All others are forbidden:
            RewriteRule /(.*) - [F]
            RewriteLog /var/log/apache2/rewrite.log
            RewriteLogLevel 5

            ScriptAlias /cgi-bin /var/www/example
            <Directory /var/www/example>
                Options -Indexes
                AddHandler cgi-script .cgi
            </Directory>
        </VirtualHost>

    Well, I know the problem is the order in which these directives are processed: the <Location>s will be processed after mod_rewrite has done its work. But I believe there is a way to change that. :) Using the standard Order deny,allow + Allow from <something> directives is preferable, because they are commonly used in other places like this. Thank you for your attention. :)

    Read the article

  • ASP.Net MVC, JS injection and System.ArgumentException - Illegal Characters in path

    - by Mose
    Hi, in my ASP.Net MVC application I use custom error handling. I want to perform custom actions for each error case I meet in my application, so I override Application_Error, get the Server.GetLastError(), and do my business depending on the exception, the current user, the current URL (the application runs on many domains), the user IP, and many other things. Obviously, the application is often the target of hackers. In almost all cases it's not a problem to detect and manage this, but for some JS URL attacks my error handling does not do what I want. Example (from logs):

        http://localhost:1809/Scripts/]||!o.support.htmlSerialize&&[1

    When I get such a URL, an exception is raised when accessing the ConnectionStrings section in the web.config, and I can't even redirect to another URL. It leads to a "System.ArgumentException - Illegal Characters in path", etc. The screenshot below shows the problem: http://screencast.com/t/Y2I1YWU4 An obvious solution is to write an HTTP module to filter the URLs before they reach my application, but I'd like to avoid that because: (1) I like having all the security managed in one place (in the Application_Error() method); (2) in the module I cannot access all the data I have in the application itself (application-specific data I don't want to debate here). Questions: Did you meet this problem? How did you manage it? Thanks for your suggestions, Mose

    Read the article
