Search Results

Search found 16001 results on 641 pages for 'convention over configuration'.

Page 559 of 641

  • What makes deployment successful for some users and unsuccessful for others?

    - by Julien
    I am trying to deploy a Visual C++ application (developed with Microsoft Visual Studio 2008) using a Setup and Deployment Project. After installation, users on some target computers get the following error message when launching the application executable: “This application has failed to start because the application configuration is incorrect. Reinstalling the application may fix the problem.” Another user could run the application properly after installation. I cannot find the root cause of this problem, despite spending several hours on the Visual Studio help files and online forums (most postings date back to 2006). Does anyone at Stack Overflow have a suggestion? Additional details appear below. The application uses FLTK 1.1.9 as a GUI library, as well as some Boost 1.39 libraries (regex, lexical_cast, date_time, math). I made sure I am deploying the release version (not the debug version) of the application. The Runtime Library in the Code Generation settings is Multi-threaded DLL (/MD). Dependency Walker lists the following DLLs for myapp.exe: wsock32.dll, comctl32.dll, kernel32.dll, user32.dll, gdi32.dll, shell32.dll, ole32.dll, msvcp90.dll, msvcr90.dll. In the Setup and Deployment Project, I add the following DLLs to the File System on Target Machine: fltkdlld.dll, and a folder named Microsoft.VC90.CRT with msvcm90.dll, msvcp90.dll, msvcr90.dll and Microsoft.VC90.CRT.manifest. The installation process on the target computers getting the error message requires having the .NET Framework 3.5 installed first. Any suggestions? Thanks in advance!
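
    A frequent cause of this particular error is a side-by-side (SxS) mismatch: the CRT version named in the manifest embedded in myapp.exe differs from the Microsoft.VC90.CRT folder actually deployed (e.g. built against the SP1 CRT but shipping the RTM one), or a debug DLL such as fltkdlld.dll (the trailing "d" suggests a debug FLTK build) drags in the debug CRT, which is not installed on end-user machines. For orientation, the dependency section of a VS2008 manifest looks roughly like the sketch below; the version and token shown are illustrative and must be compared against the manifest actually embedded in myapp.exe:

        <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
        <assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
          <dependency>
            <dependentAssembly>
              <!-- version/publicKeyToken are illustrative; they must match the
                   CRT shipped in the Microsoft.VC90.CRT folder -->
              <assemblyIdentity type="win32" name="Microsoft.VC90.CRT"
                                version="9.0.21022.8" processorArchitecture="x86"
                                publicKeyToken="1fc8b3b9a1e18e3b" />
            </dependentAssembly>
          </dependency>
        </assembly>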

    Read the article

  • Apache as a proxy for multiple Tomcats

    - by Sebb77
    I have one Apache server and two Tomcat servers running two different applications. I want to use Apache as a proxy so that users can access both applications at the same URL via different paths, e.g.: localhost/app1 --> localhost:8080/app1 localhost/app2 --> localhost:8181/app2 I tried all three Apache proxying approaches (mod_jk, mod_proxy_http and mod_proxy_ajp), but while the first application works, the second is not accessible. This is the Apache configuration I'm using: ProxyPassMatch ^(/.*\.gif)$ ! ProxyPassMatch ^(/.*\.css)$ ! ProxyPassMatch ^(/.*\.png)$ ! ProxyPassMatch ^(/.*\.js)$ ! ProxyPassMatch ^(/.*\.jpeg)$ ! ProxyPassMatch ^(/.*\.jpg)$ ! ProxyRequests Off ProxyPass /app1 ajp://localhost:8009/ ProxyPassReverse /app1 ajp://localhost:8009/ ProxyPass /app2 ajp://localhost:8909/ ProxyPassReverse /app2 ajp://localhost:8909/ With the above, I manage to view the Tomcat root application at localhost/app1, but I get "Service Temporarily Unavailable" (an Apache error) when accessing app2. I need to keep the Tomcat servers separate because I have to restart one of the applications often, and putting both apps on the same Tomcat is not an option. Can someone point out what I'm doing wrong? Thank you all.
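
    For comparison, a minimal sketch of the mod_proxy_ajp mapping the poster seems to be aiming for, assuming each Tomcat defines an AJP connector on the given port and each app is deployed under the matching context path. Note that mapping /app1 to ajp://localhost:8009/ (the root context, as in the configuration above) is what makes the Tomcat ROOT application appear, and "Service Temporarily Unavailable" usually means nothing is listening on the target port, so it is worth checking that the second Tomcat's server.xml really defines an AJP connector on 8909:

        ProxyRequests Off
        ProxyPass        /app1 ajp://localhost:8009/app1
        ProxyPassReverse /app1 ajp://localhost:8009/app1
        ProxyPass        /app2 ajp://localhost:8909/app2
        ProxyPassReverse /app2 ajp://localhost:8909/app2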

    Read the article

  • "You have already activated" message even when using bundle exec

    - by juanpastas
    I am installing the gems from my Gemfile into the shared path, as Capistrano does by default, and when I run: bundle exec rake assets:precompile RAILS_ENV=production I get: You have already activated rake 0.9.2.2, but your Gemfile requires rake 10.0.4. Using bundle exec may solve this. Note that cat Gemfile.lock | grep rake returns: rake (>= 0.8.7) rake (10.0.4) This is my gem environment output: - RUBYGEMS VERSION: 1.8.24 - RUBY VERSION: 1.9.3 (2013-06-27 patchlevel 448) [x86_64-linux] - INSTALLATION DIRECTORY: /home/bitnami/my_app/shared/bundle/ruby/1.9.1/ - RUBY EXECUTABLE: /opt/bitnami/ruby/bin/ruby - EXECUTABLE DIRECTORY: /home/bitnami/my_app/shared/bundle/ruby/1.9.1/bin - RUBYGEMS PLATFORMS: - ruby - x86_64-linux - GEM PATHS: - /home/bitnami/my_app/shared/bundle/ruby/1.9.1/ - GEM CONFIGURATION: - :update_sources => true - :verbose => true - :benchmark => false - :backtrace => false - :bulk_threshold => 1000 - "gemhome" => "/home/bitnami/my_app/shared/bundle/ruby/1.9.1/" - "gempath" => ["/home/bitnami/my_app/shared/bundle/ruby/1.9.1/"] - REMOTE SOURCES: - http://rubygems.org/ Update: which -a rake returns: /opt/bitnami/rvm/bin/rake /opt/bitnami/ruby/bin/rake Update 2: I tried giving the full path to rake, but the problem persists.
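
    For what it's worth, a commonly suggested sequence for this symptom, on the assumption that a system-wide rake binstub (e.g. /opt/bitnami/ruby/bin/rake) is being activated before Bundler and that the two rake constraints in Gemfile.lock need re-resolving (paths are illustrative):

        # run from the directory whose Gemfile/Gemfile.lock should be in charge,
        # or point Bundler at it explicitly:
        cd /home/bitnami/my_app/current
        bundle update rake     # re-resolve so only rake 10.0.4 is locked
        BUNDLE_GEMFILE=$PWD/Gemfile bundle exec rake assets:precompile RAILS_ENV=production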

    Read the article

  • Developing on both Windows & Linux machines simultaneously

    - by Jamie
    Sorry for the vague title (I couldn't think of a better way to describe it). I have a Windows machine which I do development on. However, I have a new project which needs to interact with a Linux system (executing Linux commands, etc.), so I can't develop it entirely on my Windows machine, and I don't want to code on the dev machine, svn commit, and then svn update on the Linux machine. Is there a way for any changes I make on my dev machine to be quickly mirrored to the Linux machine? SVN is not a very quick option, and some changes will be very minor. Any ideas? A network share, I guess... but that's not very pretty (and a bit slow too). As fellow developers, I would like to know if you've been in a similar situation and how you resolved it. On a further note, I can't just install Ubuntu as my development machine and mirror the commands, applications, etc. from the Linux machine, because it is a cluster 'master' machine and therefore has quite a special configuration. Thanks! EDIT: I've also thought about exposing web services on the Linux machine and just calling them from my code, thus separating the platform dependency. What do you think about that too? Thanks.
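
    One low-friction way to get that kind of mirroring is rsync over SSH, run on save or from a shortcut; a sketch, with host and paths as placeholders (on Windows this would typically go through cwRsync or Cygwin):

        # push the working tree to the Linux box, skipping Subversion metadata
        rsync -avz --delete --exclude '.svn' \
            /c/projects/myproject/ devuser@cluster-master:/home/devuser/myproject/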

    Read the article

  • What are simple instructions for creating a Python package structure and egg?

    - by froadie
    I just completed my first (minor) Python project, and my boss wants me to package it nicely so that it can be distributed and called from other programs easily. He suggested I look into eggs. I've been googling and reading, but I'm just getting confused. Most of the sites I'm looking at explain how to use Python eggs that were already created, or how to create an egg from a setup.py file (which I don't yet have). All I have now is an Eclipse pydev project with about 4 modules and a settings/configuration file. In easy steps, how do I go about structuring it into folders/packages and compiling it into an egg? And once it's an egg, what do I have to know about deploying/building/using it? I'm really starting from scratch here, so don't assume I know anything; simple step-by-step instructions would be really helpful... These are some of the sites that I've been looking at so far: http://peak.telecommunity.com/DevCenter/PythonEggs http://www.packtpub.com/article/writing-a-package-in-python http://www.ibm.com/developerworks/library/l-cppeak3.html#N10232 I've also browsed a few SO questions but haven't really found what I need. Thanks!
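
    For orientation, the missing piece is mostly the setup.py at the project root; with a layout of foo/__init__.py plus submodules, a minimal setuptools script might look like the sketch below (names are placeholders), after which python setup.py bdist_egg drops the egg into dist/:

        from setuptools import setup, find_packages

        setup(
            name="foo",                       # distribution name (placeholder)
            version="0.1",
            packages=find_packages(),         # finds the foo package and submodules
            package_data={"foo": ["*.cfg"]},  # ship the settings/configuration file too
        )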

    Read the article

  • Algorithm for finding similar users through a join table

    - by Gdeglin
    I have an application where users can select a variety of interests from around 300 possible interests. Each selected interest is stored in a join table containing the columns user_id and interest_id. Typical users select around 50 interests out of the 300. I would like to build a system where users can find the top 20 users that have the most interests in common with them. Right now I am able to accomplish this using the following query: SELECT i2.user_id, count(i2.interest_id) AS count FROM interests_users as i1, interests_users as i2 WHERE i1.interest_id = i2.interest_id AND i1.user_id = 35 GROUP BY i2.user_id ORDER BY count DESC LIMIT 20; However, this query takes approximately 500 milliseconds to execute with 10,000 users and 500,000 rows in the join table. All indexes and database configuration settings have been tuned to the best of my ability. I have also tried avoiding the use of joins altogether using the following query: select user_id,count(interest_id) count from interests_users where interest_id in (13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,508) group by user_id order by count desc limit 20; But this one is even slower (~800 milliseconds). How can I best reduce the time needed to gather this kind of data to below 100 milliseconds? I have considered putting this data into a graph database like Neo4j, but I am not sure if that is the easiest solution or if it would even be faster than what I am currently doing.
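
    Since the query touches only the two join-table columns, one thing worth verifying is that a covering composite index exists in each direction, so both the interest lookup and the per-user grouping can be served from the index without touching the table rows (a sketch in MySQL syntax; adjust index names to taste):

        ALTER TABLE interests_users ADD INDEX idx_interest_user (interest_id, user_id);
        ALTER TABLE interests_users ADD INDEX idx_user_interest (user_id, interest_id);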

    Read the article

  • Why does NTP take so long to settle?

    - by fercis
    I am trying to set up NTP on my system. The server and the clients run on computers linked together over a local link; one of them holds the reference clock. Both the server and the clients run Ubuntu Linux. I installed the ntp daemon on both sides. On the clients, I entered the IP address of the server in /etc/ntp.conf. Everything works fine, except that the client side takes too long (around 17 minutes) to correct its time. Is it possible to get the correct time right at startup? I wrote some code that regularly calls "ntpdate" via a system call, which solves the problem, but there must be a setting that lets me decrease the client's poll time to 1-2 minutes. There are settings such as maxpoll and minpoll in ntp.conf, but I couldn't work out their function: even with the best configuration I found (minpoll 4, i.e. 16 seconds?), the client still does not correct its time within 10 minutes. In addition, in some cases my client is an embedded system (an ARM IGEP board) which always boots with a wildly wrong date (2-3 years in the past), so the time needed to correct the clock should not depend on the size of the offset either.
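
    For reference, a client-side /etc/ntp.conf along these lines is the usual suggestion (the server address is a placeholder): iburst sends a burst of packets at startup instead of waiting out the normal poll interval, and minpoll/maxpoll take powers of two, so 4 means 16 seconds and 6 means 64 seconds. For a board that boots years off, stepping the clock once before ntpd starts removes the dependence on the offset size:

        # /etc/ntp.conf (client side) -- sketch
        server 192.168.1.10 iburst minpoll 4 maxpoll 6

        # in an init/boot script, before ntpd starts, force one step:
        #   ntpdate -b 192.168.1.10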

    Read the article

  • Cannot implicitly convert type void to System.Threading.Tasks.Task<bool>

    - by sagesky36
    I have a WCF service that contains the following method. All the methods in the service are asynchronous and compile just fine. public async Task<Boolean> ValidateRegistrationAsync(String strUserName) { try { using (YeagerTechEntities DbContext = new YeagerTechEntities()) { DbContext.Configuration.ProxyCreationEnabled = false; DbContext.Database.Connection.Open(); var reg = await DbContext.aspnet_Users.FirstOrDefaultAsync(f => f.UserName == strUserName); if (reg != null) return true; else return false; } } catch (Exception) { throw; } } My client application was set up to access the WCF service with the "Allow generation of asynchronous operations" check box selected, and it generated the proxy just fine. I receive the above error when trying to call this WCF service method from my client with the following code. Mind you, I know what the error message means, but this is my first time trying to call an asynchronous task in a WCF service from a client. Task<Boolean> blnMbrShip = db.ValidateRegistrationAsync(FormsAuthentication.Decrypt(cn.Value).Name); What do I need to do to properly call the method so the design-time compile error disappears? Thanks so much in advance...
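
    The symptom suggests the generated proxy method returns void, i.e. the service reference was generated with the event-based asynchronous pattern rather than the task-based one. Regenerating the reference with "Generate task-based operations" selected (available when the client targets .NET 4.5) should make the proxy method return Task<Boolean>, which can then be awaited; a sketch using the names from the question:

        // inside an async method, after regenerating the proxy as task-based:
        bool isMember = await db.ValidateRegistrationAsync(
            FormsAuthentication.Decrypt(cn.Value).Name);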

    Read the article

  • Mercurial (hg): pushing to a cloned repository via Apache fails with "repository is unrelated"

    - by Ash
    Two scenarios, one of which works and one of which doesn't, when both should: Scenario #1 (does NOT work, via Apache): on the server: repo "A", and repo "B" cloned from repo A via http://SERVER/HG/A. On the client: repo A cloned from http://SERVER/HG/A, repo B cloned from http://SERVER/HG/B. Added a file to repo A from the client, committed and pushed it up to http://SERVER/HG/A... WORKS. Added a file to repo B from the client, committed and pushed it up to http://SERVER/HG/B... ERROR with "abort: repository is unrelated"; it only works if I force the push with -f. Scenario #2 (works, via the file system): on the server: repo "A", and repo "B" cloned from E:/HG/A. On the client: repo A cloned from E:/HG/A, repo B cloned from E:/HG/B. Added a file to repo A from the client, committed and pushed it up to E:/HG/A... WORKS. Added a file to repo B from the client, committed and pushed it up to E:/HG/B... WORKS. Conclusion: something in the Apache configuration, or in the integration between Apache and Mercurial, is making the repo "unrelated". Any ideas? Why do I need to force the push in the first scenario but not in the second? I tried both scenarios via TortoiseHg as well as the command line.
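
    One thing worth ruling out on the server: that http://SERVER/HG/A and http://SERVER/HG/B really map to two different repositories. If the hgweb script behind Apache publishes only one path, both URLs can silently serve repo A, and pushing B's changesets to it would produce exactly this "repository is unrelated" abort; the file-system scenario bypasses hgweb entirely, which would explain why it works. A typical hgweb.config sketch publishing both repos (paths illustrative):

        [paths]
        A = E:/HG/A
        B = E:/HG/B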

    Read the article

  • Accessing JRun JNDI environment variables from ColdFusion (Java)

    - by jake
    I want to put some instance-specific configuration information in JNDI. I looked at the information here: http://www.adobe.com/support/jrun/working_jrun/jrun4_jndi_and_j2ee_enc/jrun4_jndi_and_j2ee_enc03.html I have added this node to the web.xml: <env-entry> <description>Administrator e-mail address</description> <env-entry-name>adminemail</env-entry-name> <env-entry-value>[email protected]</env-entry-value> <env-entry-type>java.lang.String</env-entry-type> </env-entry> In ColdFusion I have tried several different approaches to querying the data: <cfset ctx = createobject("java","javax.naming.InitialContext") > <cfset val = ctx.lookup("java:comp/env") > That lookup returns a jrun.naming.JRunNamingContext. If I perform a lookup on ctx for the specific binding I am adding, I get an error: <cfset val = ctx.lookup("java:comp/env/adminemail") > No such binding: adminemail Performing a listBindings returns an empty jrun.naming.JRunNamingEnumeration: <cfset val = ctx.listBindings("java:comp/env") > I only want to put a string value (probably several) into the ENC (or any JNDI directory at this point).

    Read the article

  • Application Design: Single vs. Multiple Hits to the DB

    - by shyneman
    I'm building a service that performs a set of configured activities based on the type of request it receives. Each activity involves going to the database and retrieving or updating some kind of information. The logic for each activity can be generalized and re-used across different request types. The activities may need to participate in a transaction for the duration of servicing the request. One option I'm considering is having each activity maintain its own access to the DAL/database. This fully encapsulates the activity into a stand-alone, re-usable piece, but hitting the database multiple times for one request doesn't seem like a viable option, and I don't see an easy way to implement the concept of a transaction across the multiple activities either. The second option is to fold ALL the activities into one big activity and hit the database once. But this does not allow re-use and configuration of these activities for different requests. Does anyone have any suggestions about the best way to approach this problem? Thanks for any help.
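
    Assuming a .NET stack, one middle ground keeps each activity self-contained but wraps the whole request in an ambient transaction, so the activities' separate database calls still commit or roll back together; a minimal sketch (the activity collection and interface are hypothetical):

        using System.Transactions;

        // each activity keeps its own DAL access; the ambient TransactionScope
        // automatically enlists every connection opened inside it
        using (var scope = new TransactionScope())
        {
            foreach (var activity in configuredActivities)  // hypothetical collection
                activity.Execute();                         // hypothetical interface
            scope.Complete();  // commit only if every activity succeeded
        }

    One caveat: opening several connections inside one scope can escalate the transaction to the Distributed Transaction Coordinator; sharing a single open connection across the activities avoids that.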

    Read the article

  • To what extent should code try to explain fatal exceptions?

    - by Andrzej Doyle
    I suspect that all non-trivial software is likely to experience situations where it hits an external problem it cannot work around and thus needs to fail. This might be due to bad configuration, an external server being down, disk full, etc. In these situations, especially if the software is running in non-interactive mode, I expect that all one can really do is log an error and wait for the admin to read the logs and fix the problem. If someone happens to interact with the software in the meantime, e.g. a request comes in to a server that failed to initialize properly, then perhaps an appropriate hint can be given to check the logs and maybe even the error can be echoed (depending on whether you can tell if they're a technical guy as opposed to a business user). For the moment though let's not think too hard about this part. My question is, to what extent should the software be responsible for trying to explain the meaning of the fatal error? In general, how much competence/knowledge are you allowed to presume on administrators of the software, and how much should you include troubleshooting information and potential resolution steps when logging fatal errors? Of course if there's something that's unique to the runtime context this should definitely be logged; but let's assume your software needs to talk to Active Directory via LDAP and gets back an error "[LDAP: error code 49 - 80090308: LdapErr: DSID-0C090334, comment: AcceptSecurityContext error, data 525, vece]". Is it reasonable to assume that the maintainers will be able to Google the error code and work out what it means, or should the software try to parse the error code and log that this is caused by an incorrect user DN in the LDAP config? I don't know if there is a definitive best-practices answer for this, so I'm keen to hear a variety of views.
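
    To make the trade-off concrete: the "data 525" field in that message is a documented Active Directory sub-code, and translating the handful of common values is cheap compared with expecting every admin to Google the hex. A hedged Java sketch of such a table (codes taken from Microsoft's documented AcceptSecurityContext error values):

        import java.util.HashMap;
        import java.util.Map;

        /** Well-known AD sub-codes from the "data NNN" field of LDAP error 49. */
        final class AdBindErrors {
            static final Map<String, String> SUBCODES = new HashMap<String, String>();
            static {
                SUBCODES.put("525", "user not found");
                SUBCODES.put("52e", "invalid credentials (wrong password)");
                SUBCODES.put("530", "logon not permitted at this time");
                SUBCODES.put("532", "password expired");
                SUBCODES.put("533", "account disabled");
                SUBCODES.put("701", "account expired");
                SUBCODES.put("773", "user must reset password");
                SUBCODES.put("775", "account locked out");
            }
        }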

    Read the article

  • (500) Internal Server Error with C# and Web Dev 2008 Express

    - by user32848
    The code below is generic; I found it in a variety of places, including a book I have. I used it as a base for a working program in VS 2005. Now I've resurrected it in my current Visual Web Developer 2008 Express Edition, and I seem to have problems connecting it to my default development server (I don't have IIS on my XP machine). The error is: (500) Internal Server Error. Is this error what it appears to be, or something else, and how do I solve this problem? using System; using System.Collections; using System.Collections.Generic; using System.Configuration; using System.IO; using System.Net; using System.Linq; using System.Web; using System.Web.Security; using System.Web.UI; using System.Web.UI.WebControls; using System.Web.UI.WebControls.WebParts; using System.Web.UI.HtmlControls; using System.Text.RegularExpressions; public partial class _Default : System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { string strResult = ""; string url = "http://weather.unisys.com"; WebResponse objResponse; WebRequest objRequest; try { objRequest = System.Net.HttpWebRequest.Create(url); } catch { objRequest = System.Net.HttpWebRequest.Create("http://"+ url); } objResponse = objRequest.GetResponse(); using (StreamReader sr = new StreamReader(objResponse.GetResponseStream())) { strResult = sr.ReadToEnd(); sr.Close(); } } }
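
    Whatever the root cause, the 500 response usually carries a body that names the real server-side error; it can be read from the WebException by wrapping the GetResponse() call, e.g. like this sketch (using the variables from the code above; null-check ex.Response in real code):

        try
        {
            objResponse = objRequest.GetResponse();
        }
        catch (WebException ex)
        {
            // the error page body typically spells out the underlying fault
            using (StreamReader sr = new StreamReader(ex.Response.GetResponseStream()))
            {
                strResult = sr.ReadToEnd();
            }
            throw;
        }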

    Read the article

  • Beginner error with ANSI C and gcc under Ubuntu

    - by Framester
    I am just starting to program ANSI C with gcc under Ubuntu (9.04). I get the following error messages: error messages: main.c:6: error: expected identifier or ‘(’ before ‘/’ token In file included from /usr/include/stdio.h:75, from main.c:9: /usr/include/libio.h:332: error: expected specifier-qualifier-list before ‘size_t’ /usr/include/libio.h:364: error: expected declaration specifiers or ‘...’ before ‘size_t’ /usr/include/libio.h:373: error: expected declaration specifiers or ‘...’ before ‘size_t’ /usr/include/libio.h:493: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘_IO_sgetn’ In file included from main.c:9: /usr/include/stdio.h:314: error: expected declaration specifiers or ‘...’ before ‘size_t’ /usr/include/stdio.h:682: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘fread’ /usr/include/stdio.h:688: error: expected ‘=’, ‘,’, ‘;’, ‘asm’ or ‘__attribute__’ before ‘fwrite’ main.c:12: error: expected identifier or ‘(’ before ‘/’ token I assume it is a very simple problem, maybe in the configuration of Ubuntu or gcc. I am also new to programming under Linux. I googled for help and went through a tutorial, but could not find an answer. Thank you! code: #include <stdio.h> #include <math.h> int main() { printf("TestOutput\n"); return (0); } command: ~/Documents/projects/Trials$ gcc -Wall -ansi main.c
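
    One detail worth checking, since the reported line numbers don't match the short listing: -ansi selects C89, which does not allow // comments, and a // comment produces exactly the "expected identifier or '(' before '/' token" error. If the real main.c uses them, switch to /* */ comments or compile with -std=c99 instead of -ansi; a sketch:

        /* C89-safe version: block comments only under gcc -ansi */
        #include <stdio.h>

        int main(void)
        {
            /* a // comment here would break the -ansi build */
            printf("TestOutput\n");
            return 0;
        }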

    Read the article

  • Using Sphinx with a distutils-built C extension

    - by detly
    I have written a Python module including a submodule written in C: the module itself is called foo and the C part is foo._bar. The structure looks like: src/ foo/__init__.py <- contains the public stuff foo/_bar/bar.c <- the C extension doc/ <- Sphinx configuration conf.py ... foo/__init__.py imports _bar to augment it, and the useful stuff is exposed in the foo module. This works fine when it's built, but obviously won't work in uncompiled form, since _bar doesn't exist until it's built. I'd like to use Sphinx to document the project, and use the autodoc extension on the foo module. This means I need to build the project before I can build the documentation. Since I build with distutils, the built module ends up in some variably named dir build/lib.linux-ARCH-PYVERSION — which means I can't just hard-code the directory into Sphinx's conf.py. So how do I configure my distutils setup.py script to run the Sphinx builder over the built module? For completeness, here's the call to setup (the 'fake' things are custom builders that subclass build and build_ext): setup(cmdclass = { 'fake': fake, 'build_ext_fake' : build_ext_fake }, package_dir = {'': 'src'}, packages = ['foo'], name = 'foo', version = '0.1', description = desc, ext_modules = [module_real])
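
    The variably named directory doesn't have to be hard-coded: distutils exposes the pieces it uses to name it, so the custom command (or conf.py itself) can compute the path and put it on sys.path before autodoc imports foo. A sketch, assuming the layout above:

        import os
        import sys
        from distutils.util import get_platform

        # reproduce distutils' build dir name, e.g. build/lib.linux-x86_64-2.6
        build_lib = os.path.join(
            "build", "lib.%s-%s" % (get_platform(), sys.version[:3]))
        sys.path.insert(0, os.path.abspath(build_lib))

        import foo  # now resolves to the built package, including foo._bar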

    Read the article

  • Flash builder 4 - change output filename using external build-config.xml (not Ant)

    - by Casey
    I'm trying to change the output filename with a config file loaded via the compiler option -load-config. It looks like this in my compiler arguments: -load-config+=build-config.xml. I've tried the following: <flex-config> <o>absolute/path/to/filename</o> </flex-config> and <flex-config> <output>absolute/path/to/filename</output> </flex-config> and <flex-config> <compiler> <o>absolute/path/to/filename</o> </compiler> </flex-config> and <flex-config> <compiler> <output>absolute/path/to/filename</output> </compiler> </flex-config> but none have worked. I'm on a PC using Flash Builder 4. Has anyone else done this? Also, ideally, I want to use a relative path instead of absolute. I can't get this to work either, even if I do so in the "additional compiler arguments" field of the Project configuration. Thanks in advance!

    Read the article

  • Castle, sharing a transient component between a decorator and a decorated component

    - by Marius
    Consider the following example: public interface ITask { void Execute(); } public class LoggingTaskRunner : ITask { private readonly ITask _taskToDecorate; private readonly MessageBuffer _messageBuffer; public LoggingTaskRunner(ITask taskToDecorate, MessageBuffer messageBuffer) { _taskToDecorate = taskToDecorate; _messageBuffer = messageBuffer; } public void Execute() { _taskToDecorate.Execute(); Log(_messageBuffer); } private void Log(MessageBuffer messageBuffer) {} } public class TaskRunner : ITask { public TaskRunner(MessageBuffer messageBuffer) { } public void Execute() { } } public class MessageBuffer { } public class Configuration { public void Configure() { IWindsorContainer container = null; container.Register( Component.For<MessageBuffer>() .LifeStyle.Transient); container.Register( Component.For<ITask>() .ImplementedBy<LoggingTaskRunner>() .ServiceOverrides(ServiceOverride.ForKey("taskToDecorate").Eq("task.to.decorate"))); container.Register( Component.For<ITask>() .ImplementedBy<TaskRunner>() .Named("task.to.decorate")); } } How can I make Windsor instantiate the "shared" transient component so that both "Decorator" and "Decorated" get the same instance? Edit: since the design is being critiqued, I am posting something closer to what is being done in the app. Maybe someone can suggest a better solution (if sharing the transient resource between a logger and the true task is considered a bad design).
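
    For what it's worth, later Windsor releases (3.0 and up) cover this exact case with the "bound" lifestyle, which shares one transient instance across a component's whole resolution subgraph; a sketch, assuming that API is available in the version in use:

        container.Register(
            Component.For<MessageBuffer>()
                     .LifestyleBoundTo<ITask>());  // one MessageBuffer per resolved ITask graph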

    Read the article

  • VB.NET DataSet issue: "no rows" error when looking up the poster's name

    - by Gabriel
    Hi. The idea was to create a message box that stores my user name, message, and post datetime in the database as messages are sent. I soon realised: what if the user changes his name? So I decided to use the user id (icn) to identify the message poster instead. However, my code keeps giving me the same error: it says there are no rows in the dataset ds2. I've tried my query directly against SQL and it works perfectly, so I really need help spotting the error in the code below. Protected Sub Page_Load(ByVal sender As Object, ByVal e As System.EventArgs) Handles Me.Load Dim name As String Dim icn As String Dim message As String Dim time As String Dim tags As String = "" Dim strConn As System.Configuration.ConnectionStringSettings strConn = ConfigurationManager.ConnectionStrings("ufadb") Dim conn As SqlConnection = New SqlConnection(strConn.ToString()) Dim cmd As New SqlCommand("Select * From Message", conn) Dim daMessages As SqlDataAdapter = New SqlDataAdapter(cmd) Dim ds As New DataSet cmd.Connection.Open() daMessages.Fill(ds, "Messages") cmd.Connection.Close() If ds.Tables("Messages").Rows.Count > 0 Then Dim n As Integer = ds.Tables("Messages").Rows.Count Dim i As Integer For i = 0 To n - 1 icn = ds.Tables("Messages").Rows(i).Item("icn") Dim cmd2 As New SqlCommand("SELECT name FROM Member inner join Message ON Member.icn = Message.icn WHERE message.icn = @icn", conn) cmd2.Parameters.AddWithValue("@icn", icn) Dim daName As SqlDataAdapter = New SqlDataAdapter(cmd2) Dim ds2 As New DataSet cmd2.Connection.Open() daName.Fill(ds2, "PosterName") cmd2.Connection.Close() name = ds2.Tables("PosterName").Rows(0).Item("name") message = ds.Tables("Messages").Rows(i).Item("message") time = ds.Tables("Messages").Rows(i).Item("timePosted") tags = time + vbCrLf + name + ": " + vbCrLf + message + vbCrLf + tags Next txtBoard.Text = tags Else txtBoard.Text = "nothing to display" End If End Sub Help will be very much appreciated, as I have been stuck on this simple problem for 2 days.
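
    As a side note, the per-message lookup can be simplified: each message row already carries the poster's icn, so fetching the name needs only the Member table, no join. A sketch of the inner command (same parameter style as above); if this still returns no rows, the icn stored with the message probably doesn't match any Member.icn, which is worth checking for type or whitespace differences:

        Dim cmd2 As New SqlCommand("SELECT name FROM Member WHERE icn = @icn", conn)
        cmd2.Parameters.AddWithValue("@icn", icn)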

    Read the article

  • Do you have health checks in your web app or web site?

    - by Pekka
    I have built PHP based "health check" scripts for several projects, but they were always custom-made for the occasion and not written for abstraction as an independent product. I would like to know whether such a solution exists. What I mean by "health check" is a protected web page that functions much like a suite of unit tests, but on a more operational level, showing red/yellow/green statuses for things like Are the cache directories writable? Is the PHP version correct, are required extensions installed? Is the configuration file protected from writing? Is the database server reachable? Do the key tables exist in the database? Is there enough disk space available? Is the site's front page reachable and renders fully ( = no PHP errors)? Do the project's libraries' MD5 checksums match the original ones? Do you do this - or parts of it - in your applications and web sites? Are there any standardized tools for this that bring along all the functionality to perform the tests (ideally as plugins), and just need to be configured accordingly? Is there a way to set this up using one of the Unit Testing frameworks available for PHP (preferably PHPUnit)? If so, do you know any resources / tutorials outlining how?
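
    For a sense of scale, the core of such a page is small; a minimal hand-rolled sketch of a few of those checks (paths are placeholders):

        <?php
        // minimal health-check sketch: label => boolean result of the probe
        $checks = array(
            'Cache dir writable'    => is_writable('/var/www/app/cache'),
            'PHP version >= 5.2'    => version_compare(PHP_VERSION, '5.2.0', '>='),
            'PDO extension loaded'  => extension_loaded('pdo'),
            'Config file protected' => !is_writable('/var/www/app/config.php'),
            'Disk space > 100 MB'   => disk_free_space('/') > 100 * 1024 * 1024,
        );
        foreach ($checks as $label => $ok) {
            echo $label, ': ', $ok ? 'OK' : 'FAIL', "\n";
        }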

    Read the article

  • Drupal advanced ACLs for "untrusted" administrators

    - by redShadow
    I have a multi-site Drupal 6 installation containing websites for different customers. On each site there is an "administrator" role that mainly covers the customer's account. We want to give as many permissions as possible to this privileged user, but doing so with just the Drupal core permissions system could open security holes. The main thing to avoid is the customer account being able to run PHP code on the server (that would be like being logged in to the server as the www-data user, which sounds really bad). To prevent that, it is not sufficient to deny PHP code evaluation for the role: since the administrator role must have permission to manage users, the customer could also change the password of user #1 and log in to the site as the superadmin. The second goal is to hide some "confusing" administrative pages (such as module selection) but not others (such as site information configuration, theme selection, etc.). I found the User One module, which seems to address the first problem, but I have no idea how to solve the second one. I found some modules around, but none seems to fit; most ACL modules appear designed to protect content rather than the site itself, as if the site administrator were always the server owner.

    Read the article

  • GNU Make - Dependencies on non program code

    - by Tim Post
    A requirement for a program I am writing is that it must be able to trust a configuration file. To accomplish this, I am using several kinds of hashing algorithms to generate a hash of the file at compile time; this produces a header with the hashes as constants. Dependencies for this are pretty straightforward: my program depends on config_hash.h, which has a target that produces it. The makefile looks something like this: config_hash.h: $(SH) genhash config/config_file.cfg > $(srcdir)/config_hash.h $(PROGRAM): config_hash.h $(PROGRAM_DEPS) $(CC) ... ... ... I'm using the -M option to gcc, which is great for dealing with dependencies. If my header changes, my program is rebuilt. My problem is, I need to be able to tell if the config file has changed, so that config_hash.h is re-generated. I'm not quite sure how to explain that kind of dependency to GNU make. I've tried listing config/config_file.cfg as a dependency for config_hash.h, and providing a .PHONY target for config_file.cfg, without success. Obviously, I can't rely on the -M switch to gcc to help me here, since the config file is not part of any object code. Any suggestions? Unfortunately, I can't post much of the Makefile, or I would have just posted the whole thing.
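
    For the record, a plain prerequisite (with no .PHONY) is the intended mechanism here: make compares the timestamp of the config file against config_hash.h and re-runs the recipe only when the config is newer. A sketch (recipe lines begin with a tab):

        config_hash.h: config/config_file.cfg
        	$(SH) genhash config/config_file.cfg > $(srcdir)/config_hash.h

        $(PROGRAM): config_hash.h $(PROGRAM_DEPS)
        	$(CC) ...

    One possible reason the prerequisite appeared not to work: the rule's target is config_hash.h, but the recipe writes $(srcdir)/config_hash.h. If those are different paths, make never sees its target being created and the dependency logic misfires.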

    Read the article

  • How do I know which include path will be used in PHP?

    - by Joe Majewski
    When I run phpinfo() and look under the Configuration category in PHP Core, I see a directive titled include_path, with a local value and a master value. In this case, my local value is set to .: ./include: ../include: /usr/share/php: /usr/share/php/smarty: /usr/share/pear and my master value is set to .: /usr/share/php: /usr/share/pear: /usr/share/php/pear: /usr/share/php/smarty The reason I am trying to learn how this works is that there is a file in the system I am working on titled Smarty.class.php, which I'm sure sounds very familiar to anyone who uses the Smarty templating engine. One of the PHP files has the following includes: require_once("Smarty.class.php"); require_once("user_info_class.inc"); The file user_info_class.inc is in the same directory as the file making the include, which makes perfect sense to me and is the way I've always referenced files. I wanted to open up the Smarty.class.php file and assumed it would be in the same directory, but it was not. After a bit of digging, I discovered those php.ini variables and finally located the file in /usr/share/php/smarty/. So it seems that when resolving an include, PHP follows some search order involving the local and master values of include_path. Assuming my deductions are correct so far, can someone explain the order in which PHP searches for included files?
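
    In short: the master value is what php.ini sets, the local value is what the active context (a per-directory or per-virtual-host override, or a runtime call) has changed it to, and only the local value is actually used, with its entries tried strictly left to right; the first directory containing the file wins. This is easy to confirm at runtime (a sketch):

        <?php
        echo get_include_path();   // the effective (local) value, searched left to right

        // appending a directory makes it the last place searched:
        set_include_path(get_include_path() . PATH_SEPARATOR . '/extra/libs');
        echo get_include_path();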

    Read the article

  • EF Many-to-many dbset.Include in DAL on GenericRepository

    - by Bryant
    I can't get QueryObjectGraph to INCLUDE child tables if my life depended on it... what am I missing? Stuck for a third day on something that should be simple :-/ DAL: public abstract class RepositoryBase<T> where T : class { private MyLPL2Context dataContext; private readonly IDbSet<T> dbset; protected RepositoryBase(IDatabaseFactory databaseFactory) { DatabaseFactory = databaseFactory; dbset = DataContext.Set<T>(); DataContext.Configuration.LazyLoadingEnabled = true; } protected IDatabaseFactory DatabaseFactory { get; private set; } protected MyLPL2Context DataContext { get { return dataContext ?? (dataContext = DatabaseFactory.Get()); } } public IQueryable<T> QueryObjectGraph(Expression<Func<T, bool>> filter, params string[] children) { foreach (var child in children) { dbset.Include(child); } return dbset.Where(filter); } ... DAL repositories: public interface IBreed_TranslatedSqlRepository : ISqlRepository<Breed_Translated> { } public class Breed_TranslatedSqlRepository : RepositoryBase<Breed_Translated>, IBreed_TranslatedSqlRepository { public Breed_TranslatedSqlRepository(IDatabaseFactory databaseFactory) : base(databaseFactory) {} } BLL repo: public IQueryable<Breed_Translated> QueryObjectGraph(Expression<Func<Breed_Translated, bool>> filter, params string[] children) { return _r.QueryObjectGraph(filter, children); } Controller: var breeds1 = _breedTranslatedRepository .QueryObjectGraph(b => b.Culture == culture, new string[] { "AnimalType_Breed" }) .ToList(); I can't get to Breed.AnimalType_Breed.AnimalTypeId: I can drill as far as Breed.AnimalType_Breed, then IntelliSense expects an expression. For clues, these are the DB tables (AnimalType_Breed is the many-to-many join table): Breed, Breed_Translated, AnimalType_Breed, AnimalType, ...
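
    One likely culprit in the repository above: IQueryable.Include() does not modify dbset in place; it returns a new query, and the loop in QueryObjectGraph throws that return value away. Reassigning inside the loop makes the eager loading stick; a sketch:

        // requires: using System.Data.Entity;  (for the string-based Include)
        public IQueryable<T> QueryObjectGraph(Expression<Func<T, bool>> filter,
                                              params string[] children)
        {
            IQueryable<T> query = dbset;
            foreach (var child in children)
                query = query.Include(child);  // keep the query Include returns
            return query.Where(filter);
        }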

    Read the article

  • Can a page opt out of IIS 7 compression?

    - by Glen Little
    My pages are automatically being compressed by IIS7 with GZIP. That is great... but for one particular page, I need to stream it to the user, using Response.Flush() when needed. When the output is being compressed, the IIS server seems to collect all my output until the page is done before compressing and sending it to the client. That nullifies my attempt to flush the content out to the user. Is there a way for this one page to opt out of the compression? One possible option I've determined: if I manually set the content type to one that does not match the IIS configuration in c:\windows\system32\inetsrv\config\applicationhost.config, then IIS will not compress it, e.g. Response.ContentType = "x-text/html". This works okay with IE8, which falls back to displaying the HTML, but Firefox will ask the user what to do with the unknown file type. This could work if there were another MIME type that browsers would accept as HTML and that is not matched in applicationhost.config. For reference, these are the MIME types that will be compressed: <add mimeType="text/*" enabled="true" /> <add mimeType="message/*" enabled="true" /> <add mimeType="application/x-javascript" enabled="true" /> <add mimeType="application/atom+xml" enabled="true" /> <add mimeType="application/xaml+xml" enabled="true" /> Other options? Are there other ways to opt out of compression?
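
    Rather than faking the MIME type, IIS 7 lets dynamic compression be switched off for just one URL in web.config via a location block, which leaves Response.Flush() free to stream (the path is a placeholder):

        <location path="streaming-page.aspx">
          <system.webServer>
            <!-- disable only dynamic compression; static files stay compressed -->
            <urlCompression doDynamicCompression="false" />
          </system.webServer>
        </location>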

    Read the article

  • Configuring Joomla's e-mail notification for new accounts

    - by Dion
    I'm using Joomla 1.5 to create a local site for my office. The site will be accessed locally via the intranet, and my PC will be the host for the site. I'm using a login plugin, so anyone who wants to enter the site must create an account. In Joomla, every user who creates an account for the first time receives a notification e-mail like: "Hello pras, You have been added as a User to Information Center by an Administrator. This e-mail contains your username and password to log in to http://localhost/yaddayadda/ Username: hadisuryo.prasetio Password: xxxx Please do not respond to this message as it is automatically generated and is for information purposes only." But if users click the URL in the mail, which is localhost/yaddayadda/, they will not be directed to my site but to their own PC's localhost. My question is: how can I modify the e-mail or the site configuration so that the URL is no longer "localhost/yaddayadda/" but "(my IP address)/yaddayadda"? I'm not going to move the site to a web hosting service; my PC will remain the host. I've been trying to trace through the config and .ini files... it seems I have to do something with the JURI function or the $mosConfig_live_site variable in the backlink.php file: $mosConfig_absolute_path = JPATH_SITE; $mosConfig_live_site = JURI :: base(); $url_array = explode('/', $_SERVER['REQUEST_URI']); Can anyone give me assistance? Thank you.
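
    In Joomla 1.5 the base URL used in generated links can be pinned in configuration.php rather than patched in code; setting it to the host PC's LAN address should make the notification e-mails point at the server instead of each visitor's own machine (the IP below is a placeholder):

        // configuration.php (Joomla 1.5) -- sketch
        var $live_site = 'http://192.168.1.42/yaddayadda';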

    Read the article
