Search Results

Search found 10033 results on 402 pages for 'topic template'.

  • A good PHP framework in 2012

    - by Jormundir
    I've done a lot of googling around this, and practically all of the answers I find are pre-2011 and answered in the usual way: here are the 5 most popular frameworks... So I'd like to update this topic for 2012. I'm going to build a web application with a pretty complex back-end system driving it, and I'd like to use a framework so I don't have to reinvent the wheel. My application will be heavily user-based, so I would appreciate a built-in authentication/validation system. (When this is missing, it takes me a good 2 weeks of intense and frivolous research to try to pick the "best" one. I don't want to roll my own; I don't think I'd do a better job than what's out there.) I've looked into and tried a few, so I'll give you what I like and don't like, but I don't want to bias answers too much.

    I don't like:
    - Frameworks that auto-generate bloated code. If they have the feature, fine, but if I have to use it, I get frustrated.
    - Backwards compatibility with PHP 4, eww. I don't need backwards compatibility at all.

    I like:
    - Getting up and running quickly (but without all the auto-generation bogus). By this I mean that all the essentials are there, so I don't have to come to a grinding halt to research what the best 3rd-party plugin is to get the feature I need.
    - Thorough documentation and good tutorials, with good presentation of these materials.

    Please explain why your framework suggestion is good; don't just give the name of a framework without any justification. Thanks!

  • Use external inline script as local function

    - by Aidan
    Had this closed once as a duplicate, yet the so-called duplicate DID NOT actually address my whole question. I have found this script that, when run inline, returns your IP:

        <script type="text/javascript" src="http://l2.io/ip.js"></script>

    http://l2.io/ip.js has nothing more than a line of code that says:

        document.write('123.123.123.123');

    (But obviously with the user's IP address.) I want to use this IP address as a return string for a function DEFINED EXTERNALLY, BUT STILL ON MY DOMAIN. That is, I have a "scripts.js" that contains all the scripts I wish to use, and I would like to include in that list a local function that calls the l2.io script, but JavaScript won't allow the <script> tags, so I am unsure as to how to do this. I.e.:

        function getIP() {
            return (THAT SCRIPT'S OUTPUT);
        }

    This is the topic this was supposedly a duplicate of, and it is very similar: Get ip address with javascript. However, that DOES NOT address defining it as a forwarded script in my own script file.
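
    If I remember l2.io's documentation correctly, ip.js accepts a var query parameter that assigns the address to a global variable instead of calling document.write; treat that parameter as an assumption to verify against l2.io itself. A minimal sketch:

        <!-- In the page, loaded before scripts.js: -->
        <script type="text/javascript" src="http://l2.io/ip.js?var=userip"></script>
        <script type="text/javascript" src="/scripts.js"></script>

    And in scripts.js:

        // userip is the global set by the l2.io include above
        function getIP() {
            return (typeof userip !== 'undefined') ? userip : null;
        }

    Note the value only exists once the external script has loaded, so getIP() must not be called before then.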

  • How to pass instance variables between handlers (routes) in Sinatra (without flash, sessions, class variables or db)?

    - by jj_
    Say you have:

        get '/' do
          haml :index
        end

        get '/form' do
          haml :form
        end

        post '/form' do
          @message = params[:message]
          redirect to('/')  # <-- how to pass @message here?
        end

    I'd like the @message instance variable to be available (passed) to the "/" action as well, so I can show it in the haml view. How can I do that without using sessions, flash, a @@class_variable, or db persistence? I'd simply like to pass values as if I were passing values between methods. I don't want to use session cookies because users could have them turned off, I don't like it being a class variable which is exposed to all code, and I don't need the overhead of a db. Thanks.

    Edit: This is another question explaining a very easy way to deal with this in Rails: Passing parameters in rails redirect_to. This is some more info I gathered around from forums. The following works for Rails; I've tried it in Sinatra but no luck. Please try it anyway, maybe I did something wrong, I don't know, and if this code helps someone come up with a new idea, please share it. If you are redirecting to action2 at the end of action1, just append the value to the end of the redirect:

        my_var = <some logic>
        redirect_to :action => 'action2', :my_var => my_var

    On the same thread another user proposes the following:

        def action1
          redirect_to :action => 'action2', :value => params[:current_variable]
        end

        def action2
          puts params[:value].inspect
        end

    Source: http://www.ruby-forum.com/topic/134953. Can something like this work in Sinatra? Thanks.
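
    One session-free approach (a sketch, not tested against this exact app) is to carry the value across the redirect in the query string, the same trick the Rails snippets above rely on, at the cost of the value being visible in the URL:

        require 'sinatra'
        require 'rack/utils'

        get '/' do
          @message = params[:message]  # present only when arriving via the redirect
          haml :index
        end

        post '/form' do
          # escape the user input so it survives as a query parameter
          redirect to("/?message=#{Rack::Utils.escape(params[:message])}")
        end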

  • What does the `new` keyword do

    - by Mike
    I'm following a Java tutorial online, trying to learn the language, and it's bouncing between two semantics for using arrays:

        long results[] = new long[3];
        results[0] = 1;
        results[1] = 2;
        results[2] = 3;

    and:

        long results[] = {1, 2, 3};

    The tutorial never really mentioned why it switched back and forth between the two, so I searched a little on the topic. My current understanding is that the new operator is creating an object of "array of longs" type. What I do not understand is why I would want that, and what the ramifications of it are. Are there certain "array" specific methods that won't work on an array unless it's an "array object"? Is there anything that I can't do with an "array object" that I can do with a normal array? Does the Java VM have to do cleanup on objects initialized with the new operator that it wouldn't normally have to do? I'm coming from C, so my Java terminology may not be correct here; please ask for clarification if something's not understandable.
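
    For what it's worth, in Java both forms create the same kind of heap object; the brace initializer is just shorthand that is only legal in a declaration (or, in expression form, as new long[]{1, 2, 3}). A small demonstration:

        public class ArrayDemo {
            public static void main(String[] args) {
                long[] a = new long[3];   // allocate, then fill
                a[0] = 1; a[1] = 2; a[2] = 3;

                long[] b = {1, 2, 3};     // initializer shorthand

                // Both are ordinary array objects: same class, same length
                // field, and both are garbage-collected like any object.
                System.out.println(a.length + " " + b.length);      // 3 3
                System.out.println(a.getClass() == b.getClass());   // true
            }
        }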

  • Are "strings.xml" string arrays always parsed/deserialized in the same order?

    - by PhilaPhan80
    Can I count on string arrays within the "strings.xml" resource file to be parsed/deserialized in the same order every time? If anyone can cite any documentation that clearly spells out this guarantee, I'd appreciate it; or, at the very least, offer a significant amount of experience with this topic. Also, is this a best practice, or am I missing a simpler solution? Note: this will be a small list, so I'm not looking to implement a more complicated database or custom XML solution unless I absolutely have to.

        <!-- KEYS (ALWAYS CORRESPONDS TO LIST BELOW??) -->
        <string-array name="keys">
            <item>1</item>
            <item>2</item>
            <item>3</item>
        </string-array>

        <!-- VALUES (ALWAYS CORRESPONDS TO LIST ABOVE??) -->
        <string-array name="values">
            <item>one</item>
            <item>two</item>
            <item>three</item>
        </string-array>
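
    In practice getStringArray() returns items in document order, though I can't point to a line of documentation that promises it, so treat the zip below as an assumption. A sketch (inside an Activity; java.util imports assumed); alternatively, a single array of "key|value" items split in code keeps each pair in one place so they can't drift apart:

        // Zip the two arrays by position (assumes document order holds)
        String[] keys = getResources().getStringArray(R.array.keys);
        String[] values = getResources().getStringArray(R.array.values);
        Map<String, String> map = new LinkedHashMap<String, String>();
        for (int i = 0; i < keys.length; i++) {
            map.put(keys[i], values[i]);
        }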

  • Munin on CentOS 6 - missing perl MODULE_COMPAT_5.8.8

    - by André Bergonse
    I'm trying to install Munin on a new VPS through yum install munin, but I keep getting an error about a missing perl module: Requires: perl(:MODULE_COMPAT_5.8.8). The perl version currently installed is v5.10.1. I've searched all around and still haven't found a solution for this. Here's the relevant part of the output of the installation attempt:

        --> Finished Dependency Resolution
        Error: Package: perl-Mail-Sender-0.8.13-2.el5.1.noarch (epel)  Requires: perl(:MODULE_COMPAT_5.8.8)
        Error: Package: perl-Log-Log4perl-1.13-2.el5.noarch (epel)  Requires: perl(:MODULE_COMPAT_5.8.8)
        Error: Package: perl-Mail-Sendmail-0.79-9.el5.1.noarch (epel)  Requires: perl(:MODULE_COMPAT_5.8.8)
        Error: Package: perl-Log-Dispatch-FileRotate-1.16-1.el5.noarch (epel)  Requires: perl(:MODULE_COMPAT_5.8.8)
        Error: Package: perl-Crypt-DES-2.05-3.el5.i386 (epel)  Requires: perl(:MODULE_COMPAT_5.8.8)
        Error: Package: munin-1.4.7-5.el5.noarch (epel)  Requires: perl(:MODULE_COMPAT_5.8.8)
        Error: Package: perl-IO-Multiplex-1.08-5.el5.noarch (epel)  Requires: perl(:MODULE_COMPAT_5.8.8)
        Error: Package: munin-common-1.4.7-5.el5.noarch (epel)  Requires: perl(:MODULE_COMPAT_5.8.8)
        Error: Package: perl-Net-Server-0.96-2.el5.noarch (epel)  Requires: perl(:MODULE_COMPAT_5.8.8)
        Error: Package: perl-Log-Dispatch-2.20-1.el5.noarch (epel)  Requires: perl(:MODULE_COMPAT_5.8.8)
        Error: Package: munin-1.4.7-5.el5.noarch (epel)  Requires: bitstream-vera-fonts
        Error: Package: perl-Net-SNMP-5.2.0-1.el5.1.noarch (epel)  Requires: perl(:MODULE_COMPAT_5.8.8)
        Error: Package: perl-HTML-Template-2.9-1.el5.2.noarch (epel)  Requires: perl(:MODULE_COMPAT_5.8.8)
        Error: Package: perl-IPC-Shareable-0.60-3.el5.noarch (epel)  Requires: perl(:MODULE_COMPAT_5.8.8)
        You could try using --skip-broken to work around the problem
        You could try running: rpm -Va --nofiles --nodigest
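
    Note the .el5 suffixes: these are EPEL 5 packages, built against perl 5.8.8, being pulled onto a CentOS 6 box that ships perl 5.10.1. A hedged sketch of the likely fix, swapping in the EPEL 6 repository (the exact release RPM version and URL may have moved since):

        rpm -e epel-release                      # drop the EPEL5 repo definition
        rpm -Uvh http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
        yum clean all
        yum install munin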

  • Windows 2012/IIS 8 + ASP.NET MVC Application 403.14 (Forbidden) - The Web server is configured to not list the contents

    - by WiredPrairie
    I have a very simple MVC 4 application I'm trying to deploy to a Windows 2012 server. Inconsistently, when navigating to the root of the web application (http://localhost/app), it returns a 403.14 - Forbidden:

        Detailed Error Information:
        Module:        DirectoryListingModule
        Notification:  ExecuteRequestHandler
        Handler:       StaticFile
        Error Code:    0x00000000
        Requested URL: http://localhost:80/test1/
        Physical Path: c:\apps\test1\
        Logon Method:  Negotiate

    The web application:

    - is a very vanilla VS2012 MVC 4 Intranet template, with only a tweak to a label to prove things were working
    - runs in an Integrated v4.0 application pool set up to use Windows authentication
    - has a custom AD identity assigned to the application pool (so it can gain access to a SQL server), and that identity has read permissions in the c:\apps\test1 folder in which it is running
    - targets .NET 4.0 currently
    - has no default document (like a default.aspx), as an MVC 4 application shouldn't need one; I don't want to enable directory listings (as that's not the real error)
    - works locally on my machine in IIS Express on Windows 8
    - has <modules runAllManagedModulesForAllRequests="true" /> configured in web.config
    - is set to precompiled during publish

    Installed roles: Web Server (IIS) / Application Development / (.NET 4.5 Extensibility, Application Initialization, ASP.NET 4.5, ISAPI Extensions, ISAPI Filters, WebSocket Protocol). When I change the precompiled option to false, the web application does not fail (in my testing at least, it seems to work consistently). The reason I say it's inconsistent is that I've seen it work, then I've published, and the error returns. I can't find a pattern to the issue (and right now, I haven't been able to get it to work again at all). The 403 is returned from both local and remote web browsers. I've had trouble finding a solution that isn't intended for older versions of Windows (like suggestions to reinstall ASP.NET, which won't work on Windows 2012). I really don't know what else to try.

  • Ubuntu boot hangs after message "Running /scripts/init-bottom ... done"

    - by Douglas B. Staple
    I've been trying to copy a Proxmox container based on the Ubuntu Precise Standard template to a VirtualBox VM. I am now stuck at a point where my new Ubuntu/VirtualBox VM hangs during boot after the message "Running /scripts/init-bottom ... done".

    I started by installing Ubuntu Server 12.04.4 LTS on a VirtualBox VM; it was the closest "official" Ubuntu ISO to the Proxmox container OS I could find. I installed all updates on both the Proxmox container and the VirtualBox VM. The idea was to get the same kernel version running on the Proxmox container and the VirtualBox VM:

        sudo apt-get update ; sudo apt-get upgrade ; sudo apt-get dist-upgrade
        sudo reboot

    Then I rsync'd the entire Proxmox container to a temporary directory in the VirtualBox VM:

        cd /
        mkdir /tmp/backup
        rsync -e ssh -av --exclude={/dev,/proc,/sys,/tmp,/run,/mnt,/media,/lost+found,/boot,/selinux} root@my_proxmox_container_hostname:/ /tmp/backup

    I shut down the virtual machine and booted the VM with a bootable Linux image (the desktop image of Ubuntu 12.04 LTS, ubuntu-12.04.4-desktop-i386.iso), dropped to a root prompt, and mounted the VM root filesystem:

        sudo mount /dev/sda1 /mnt

    Then I removed files from most of /mnt:

        cd /mnt
        sudo rm -rf bin etc home lib opt sbin root usr var

    and moved all of the files from /mnt/tmp/backup into /mnt:

        sudo mv /mnt/tmp/backup/* /mnt

    Then I rebooted the system. For me, at this point the system freezes after starting, after the message:

        Running /scripts/init-bottom ... done

    I've tried reinstalling GRUB and all manner of other things. I am almost ready to give up.
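
    A sketch of what I'd check next (assuming /dev/sda1 is the VM's root partition): the copied /etc still describes the container, so /etc/fstab in particular may point at devices that don't exist under VirtualBox, and the existing initramfs was generated against the old /etc. From the live CD:

        sudo mount /dev/sda1 /mnt
        sudo mount --bind /dev  /mnt/dev
        sudo mount --bind /proc /mnt/proc
        sudo mount --bind /sys  /mnt/sys
        sudo chroot /mnt

        # inside the chroot: compare /etc/fstab against the VM's real disk
        # (UUIDs/devices), then rebuild the initramfs and boot loader
        update-initramfs -u -k all
        grub-install /dev/sda
        update-grub
        exit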

  • Importing AMIs from Hyper-V VHDX

    - by jwdaigle
    I have a couple of VHDX files that we use to template locally hosted VMs. I would like to try (some) of these on Amazon, so I need to build an AMI to upload to AWS. I have found http://aws.amazon.com/ec2/vmimport/, which is very helpful to get started. It appears that AWS does not yet support VHDX, so I found some info that told me to export the disk out of Hyper-V as a VHD file and then convert/upload it, which I am in the process of trying out. But the real question is this: while looking for info, I came across http://stackoverflow.com/questions/14346114/unable-to-rdp-to-ec2-instance. The AWS documentation seems to imply that all I need to do is run the importer and all will be well. True? I don't want to waste all the upload bandwidth and then find out it won't work. Is there something I need to install into the Hyper-V VM before converting/uploading it using the AWS command line tools? EC2-Config? Any help greatly appreciated.

  • Good bug tracking with SharePoint?

    - by torbengb
    At my place of work, it has been decided to move many processes to SharePoint. I'm now looking into how SharePoint can be used for bug tracking (à la Mantis, FogBugz, etc., but within SharePoint). Specifically, we're using a collaboration room and the solution must work inside that. I know that I can create lists using an "Issue tracker" template, but it lacks workflow, integrated correspondence (like FogBugz), and an audit log (any user can edit any field any time, without it being noted anywhere). That's not sufficient, so I am looking for "bigger" solutions but haven't yet found anything at all. This question is similar but aims at helpdesk use; we aim at bug tracking and change requests to a system. I'm open to suggestions! As I'm not an administrator, I can't just grab a SharePoint component and install it for testing. I'm looking for experiences, documentation, white papers, screenshots; the actual downloadable will be relevant later. Ideally, some of these matters should be covered:

    - Support for different ticket types (bug, feature, inquiry, internal task).
    - Configurable workflow per ticket type, no fixed number of steps.
    - Configurable read/write permissions per field and per workflow status.
    - Configurable dashboard for managers with nice charts.
    - Configurable email notifications.
    - Correspondence à la FogBugz. (Challenge: we use Notes, not Exchange.)

  • Strange permission errors in new PostgreSQL installation

    - by Bart van Heukelom
    A freshly installed PostgreSQL (with configuration overwritten) won't start:

        $ sudo service postgresql start
         * Starting PostgreSQL 9.1 database server
         * Error: could not read /etc/postgresql/9.1/main/postgresql.conf: Permission denied

    It looks like it should be able to read it, though:

        $ ls -l postgresql.conf
        -rw------- 1 postgres postgres 19450 2012-06-14 10:07 postgresql.conf

    But fine, I'll chmod +r it to test if that works:

        $ sudo chmod +r postgresql.conf
        $ sudo service postgresql start
         * Starting PostgreSQL 9.1 database server
         * The PostgreSQL server failed to start. Please check the log output:
        Error: Could not open log file /var/log/postgresql/postgresql-9.1-main.log

    Huh?

        $ ls -l /var/log/postgresql/
        total 4
        -rw-r----- 1 postgres adm 827 2012-06-14 10:07 postgresql-9.1-main.log

    I don't get it. What can be wrong here? This used to work before. Can I maybe monitor what process attempts to open the file? It's Ubuntu 11.10 on EC2, using Chef. For completeness, here's the recipe:

        # Install PostgreSQL
        package "postgresql-9.1"

        # Stop server
        service "postgresql" do
          action :stop
        end

        # Overwrite configuration (setting data dir)
        template "/etc/postgresql/9.1/main/postgresql.conf" do
          source "postgresql-conf.erb"
          owner "postgres"
          group "postgres"
        end

        # Start server
        service "postgresql" do
          action :start
        end
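
    When a file's own mode looks fine but opening it still fails, the usual suspects are a missing execute bit on a parent directory or a mandatory access control layer (AppArmor, on Ubuntu). A few hedged diagnostics:

        # Show permissions of every component along each path:
        namei -l /etc/postgresql/9.1/main/postgresql.conf
        namei -l /var/log/postgresql/postgresql-9.1-main.log

        # Reproduce the read as the postgres user:
        sudo -u postgres cat /etc/postgresql/9.1/main/postgresql.conf > /dev/null

        # AppArmor denials, if any, end up in the kernel log:
        sudo dmesg | grep -i apparmor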

  • HP DL160 G6 BIOS update fails

    - by Bojo
    I tried to update the BIOS on my HP DL160 G6 servers. Unfortunately the Windows update downgraded my BIOS version from 243 to 237, and when I try to upgrade both servers to version 245 the update fails. First I got a warning like:

        CMOS Layout difference between System ROM and ROM file has detected.
        AFU recommand (sic) adding /C commands of your original input commands.
        Press "A" to accept AFU's recommendation.
        Press "F" to keep original input commands.

    and the update did not do anything. But now I get a message like:

        Reading flash ........ done
        Bootblock checksum ... ok
        Module checksums ..... bad
        Error: BIOS checksum error

    and the update stops. I tried some commands from this page: http://www.ami.com/support/downloads/txt/AFU_README.TXT, but I didn't try too much; the servers are still booting. Does anyone know how to update my servers to BIOS version 245? I used this version: http://h20565.www2.hp.com/portal/site/hpsc/template.PAGE/public/psi/swdDetails/?sp4ts.oid=3884344&spf_p.tpst=swdMain&spf_p.prp_swdMain=wsrp-navigationalState%3Didx%253D%257CswItem%253DMTX_7bd12651ab954fdcb0d7ee164a%257CswEnvOID%253D54%257CitemLocale%253D%257CswLang%253D%257Cmode%253D%257Caction%253DdriverDocument&javax.portlet.begCacheTok=com.vignette.cachetoken&javax.portlet.endCacheTok=com.vignette.cachetoken and created a bootable USB stick with HPQUSB.exe.

  • Fetching e-mails into Redmine via IMAP

    - by Danilo Bargen
    I'm trying to fetch e-mails into Redmine via IMAP. The e-mails I'm generating look like this:

        FooBar Ltd 123456
        http://example.com/Foobar-Ltd-123456.html
        Project: backend
        Tracker: Dataerror
        Beschreibung: This is the description
        ===========================
        CLIENT_IP: 192.168.1.215
        HTTP_USER_AGENT: mozilla/asdfjköl

    I try to fetch them into Redmine via this command:

        rake -f /var/www/projects/redmine/Rakefile redmine:email:receive_imap \
          RAILS_ENV="production" host=example.com port=993 ssl=true username=redmine \
          password=1234 project=myproject tracker=other \
          allow_override=project,tracker,category,priority \
          move_on_success=read move_on_failure=failed

    But the e-mails get moved into the failed folder. I had this setup running some time ago with a different e-mail generator but pretty much the same template, and I can't figure out why it's not working. The permissions seem to be OK too. In order to further debug this issue, I need some logfiles. Are there any logfiles written by this command? Or are there any other suggestions to solve this issue? My environment:

        danilo@jabba:/var/www/projects/redmine$ RAILS_ENV=production script/about
        About your application's environment
        Ruby version              1.8.7 (i486-linux)
        RubyGems version          1.3.5
        Rack version              1.0
        Rails version             2.3.5
        Active Record version     2.3.5
        Active Resource version   2.3.5
        Action Mailer version     2.3.5
        Active Support version    2.3.5
        Application root          /var/www/projects/redmine
        Environment               production
        Database adapter          mysql
        Database schema version   20100819172912
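
    To answer the logfile question directly: if memory serves, the mail handler writes its reason for rejecting each message to Redmine's own production log, and rake itself can print a stack trace. A sketch (paths as in the question):

        # Re-run the fetch with a full trace:
        rake --trace -f /var/www/projects/redmine/Rakefile redmine:email:receive_imap \
          RAILS_ENV="production" host=example.com port=993 ssl=true username=redmine \
          password=1234 project=myproject tracker=other \
          allow_override=project,tracker,category,priority \
          move_on_success=read move_on_failure=failed

        # Watch the MailHandler messages while fetching:
        tail -f /var/www/projects/redmine/log/production.log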

  • Nginx unknown limit_req_zone

    - by Kayle
    Nginx currently will not start due to the error mentioned in the title. Here's the actual error I'm getting:

        $ sudo /etc/init.d/nginx restart
        Restarting nginx: nginx: [emerg] unknown limit_req_zone "one" in /etc/nginx/sites-enabled/www.myhashimotosthyroiditis.com:15
        nginx: configuration file /etc/nginx/nginx.conf test failed

    This is immediately after creating the VM in question (www.myhashimotosthyroiditis.com), using a template I found here that was supposedly one of the "out-of-the-box-for-lazy-people" templates. I'm very new to Nginx and I could not find any helpful information via Google or searching here, so I beg your pardon if this is a product of stupidity. Here is the entirety of the VM file:

        server {
            listen 80;
            server_name myhashimotosthyroiditis.com www.myhashimotosthyroiditis.com;
            root /var/www/myhashimotosthyroiditis;
            access_log /var/log/nginx/myhashimotosthyroiditis.access.log;
            error_log /var/log/nginx/myhashimotosthyroiditis.error.log;

            location / {
                try_files $uri $uri/ /index.php;
            }

            location /search {
                limit_req zone=one burst=3 nodelay;
                rewrite ^ /index.php;
            }

            fastcgi_intercept_errors off;

            location ~* \.(?:ico|css|js|gif|jpe?g|png)$ {
                expires max;
                add_header Pragma public;
                add_header Cache-Control "public, must-revalidate, proxy-revalidate";
            }

            include php.conf;
            # You may want to remove the robots line from drop to use a virtual robots.txt
            # or create a drop_wp.conf tailored to the needs of the wordpress configuration
            include drop.conf;
        }
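
    The limit_req directive on line 15 refers to a shared-memory zone named "one" that must be declared first with limit_req_zone, and that declaration is only valid in the http { } context (i.e. in /etc/nginx/nginx.conf), which is presumably what the template assumed was already there. A sketch (the size and rate are starting values, not recommendations):

        # /etc/nginx/nginx.conf
        http {
            # 10 MB of state keyed by client address, limited to 1 request/second
            limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s;

            ...
            include /etc/nginx/sites-enabled/*;
        }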

  • therubyracer on Ubuntu install issues with ruby-2.0.0-p247

    - by Victor S
    I can't seem to get the therubyracer gem installed on Ubuntu; I get the following errors. Can anyone help?

        ERROR:  Error installing therubyracer:
                ERROR: Failed to build gem native extension.

        /home/victorstan/.rvm/rubies/ruby-2.0.0-p247/bin/ruby extconf.rb
        checking for main() in -lpthread... yes
        creating Makefile
        make
        compiling constraints.cc
        compiling array.cc
        compiling value.cc
        compiling invocation.cc
        compiling primitive.cc
        compiling trycatch.cc
        compiling context.cc
        compiling exception.cc
        compiling template.cc
        compiling accessor.cc
        compiling object.cc
        compiling script.cc
        compiling external.cc
        compiling stack.cc
        compiling gc.cc
        compiling backref.cc
        compiling heap.cc
        compiling v8.cc
        compiling constants.cc
        compiling date.cc
        compiling function.cc
        compiling rr.cc
        compiling message.cc
        compiling init.cc
        compiling string.cc
        compiling handles.cc
        compiling signature.cc
        compiling locker.cc
        linking shared-object v8/init.so
        g++: /home/victorstan/.rvm/gems/ruby-2.0.0-p247@global/gems/libv8-3.11.8.17-x86_64-linux/vendor/v8/out/ia32.release/obj.target/tools/gyp/libv8_base.a: No such file or directory
        g++: /home/victorstan/.rvm/gems/ruby-2.0.0-p247@global/gems/libv8-3.11.8.17-x86_64-linux/vendor/v8/out/ia32.release/obj.target/tools/gyp/libv8_snapshot.a: No such file or directory
        make: *** [init.so] Error 1

    Update: It seems it's trying to use a 32-bit library instead of a 64-bit library. Has anyone else experienced Ubuntu trying to install the 32-bit version of applications and packages instead of the 64-bit one (even though the system is 64-bit Linux)? Related: https://github.com/cowboyd/therubyracer/issues/262
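
    The error supports that reading: the x86_64-linux binary libv8 gem is being asked for ia32.release objects it doesn't contain. A workaround commonly suggested around that issue (a sketch; the package name and pinned version are assumptions to adapt) is to discard the prebuilt gem and build libv8 against a system V8 before installing therubyracer:

        sudo apt-get install libv8-dev
        gem uninstall libv8
        gem install libv8 -v 3.11.8.17 -- --with-system-v8
        gem install therubyracer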

  • Excel techniques for perfmon csv log file analysis

    - by Aszurom
    I have perfmon running against several servers, outputting to a .csv file data like CPU % time, memory bytes free, and hard disk I/O metrics like s/write and writes/s. The ones monitoring the SQL servers are also collecting SQL stats; the web servers are collecting .NET-relevant stuff. I am aware of PAL, and actually used it as a template for what data to capture based on server type. I just don't think the output it generates is detailed or flexible enough, though it does do a pretty remarkable job of parsing logs and making graphs. I'm borderline incompetent with Excel, so I'm hoping to be directed to some knowledge of how to take a perfmon output .csv and mine it in Excel to produce some numbers that are meaningful to me as a sysadmin. I could of course just pick a range of data and assemble a graph out of that and look for spikes and trends, but I'm convinced there is some technique to this that makes it more manageable than staring at a monstrous spreadsheet of numbers and trying to make graphs of it. Plus, it's pretty time consuming and not something I can do as a "take a glance at the servers" sort of routine. I'm graphing CPU, disk use, network b/sec, etc. in Cacti as well, which is nice for seeing big trends. The problem is that those are 5-minute averages, so a server could have an intermittent problem and it washes out in the average. What do you do with perfmon data that I could learn from?
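
    One pre-Excel step worth knowing about: Windows ships relog.exe, which can thin out a perfmon log and keep only the counters you care about, so the spreadsheet stays a manageable size. A sketch (check relog /? for the exact flags on your version):

        rem Keep every 12th sample, and only the counters listed in counters.txt:
        relog input.csv -f csv -t 12 -cf counters.txt -o reduced.csv

        rem counters.txt contains one counter path per line, e.g.:
        rem   \Processor(_Total)\% Processor Time
        rem   \Memory\Available MBytes

    From there, Excel's PivotTables and PivotCharts are the usual tool for slicing a long counter-per-column CSV without hand-building every graph.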

  • "Unable to open MRTG log file" error with nagios and mrtg

    - by Simone Magnaschi
    We have a strange issue with our setup of Icinga/Nagios and MRTG. Icinga is working great and has no problems; it can monitor basically everything without issues. We set up MRTG to gather bandwidth data from our routers and switches. MRTG is working fine: it stores the log data in the /var/www/mrtg/ directory and displays the graph data via web, so we assume MRTG is doing its job. We tried to set up bandwidth checks in Nagios:

        define service{
            use                 generic-service  ; Inherit values from a template
            host_name           zywall-agora
            service_description ZYWALL AGORA TRAFFICO
            check_command       check_local_mrtgtraf!/var/www/mrtg/x.x.x.x_2.log!AVG!1000000,2000000!5000000,5000000!1000
            check_interval      1  ; Check the service every 1 minute under normal conditions
            retry_interval      1  ; Re-check every minute until its final/hard state is determined
        }

    where /var/www/mrtg/x.x.x.x_2.log is the correct log file path. We keep getting an "Unable to open MRTG log file" error in the test result in the Icinga web interface. We tried everything: giving ownership of the log file to user nagios or icinga, chmod 777 on the file, copying the file to another directory and giving it full permissions. Same error. The strange thing is that if we run the command that Nagios generates in a bash session, it works like a charm:

        /usr/lib64/nagios/plugins/check_mrtgtraf -F /var/www/mrtg/x.x.x.x_2.log -a AVG -w 10,20 -c 5000000,5000000 -e 10

    Result:

        Traffic WARNING - Avg. In = 17.9 KB/s, Avg. Out = 5.0 KB/s|in=17.877930KB/s;10.000000;5000000.000000;0.000000 out=5.000000KB/s;20.000000;5000000.000000;0.000000

    We ran that command line as root, as user nagios, and as user icinga, and all three worked OK. We thought the command Nagios generates might have something wrong in it, so we debugged Nagios, but we found that the generated command is the same as above. Searching Google for this kind of problem returns only issues on systems where MRTG is not installed, or issues with a wrong path to the log file, but neither seems to be our case. We are stuck; can somebody help?
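
    "Works from a shell for every user, fails under the daemon" is the classic SELinux signature on RHEL-family systems: files under /var/www carry httpd_* labels that the nagios/icinga domain may not be allowed to read. Some hedged diagnostics (assuming an SELinux-enabled distro):

        # Is SELinux enforcing, and is it logging denials for the plugin?
        getenforce
        grep -i denied /var/log/audit/audit.log | grep -i mrtg

        # Inspect the labels on the MRTG logs:
        ls -Z /var/www/mrtg/

        # Quick (temporary!) test: does the check go green when permissive?
        setenforce 0

    If permissive mode fixes it, either relocate the MRTG logs, relabel them, or build a local policy module with audit2allow rather than leaving enforcement off.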

  • SSL Certificate Request on Server 2003 DC - CA DNS Name not Available

    - by Beuy
    I am trying to submit a request for an SSL certificate on a domain controller in order to enable LDAP over SSL, and having no end of problems. I am following the information provided at http://support.microsoft.com/default.aspx?scid=kb;en-us;321051 and http://adldap.sourceforge.net/wiki/doku.php?id=ldap_over_ssl. Steps taken so far:

    1. Create Servername.inf with the following content:

        ;----------------- request.inf -----------------
        [Version]
        Signature="$Windows NT$"

        [NewRequest]
        Subject = "CN=servername.domain.loc" ; replace with the FQDN of the DC
        KeySpec = 1
        KeyLength = 1024
        ; Can be 1024, 2048, 4096, 8192, or 16384.
        ; Larger key sizes are more secure, but have
        ; a greater impact on performance.
        Exportable = TRUE
        MachineKeySet = TRUE
        SMIME = False
        PrivateKeyArchive = FALSE
        UserProtected = FALSE
        UseExistingKeySet = FALSE
        ProviderName = "Microsoft RSA SChannel Cryptographic Provider"
        ProviderType = 12
        RequestType = PKCS10
        KeyUsage = 0xa0

        [EnhancedKeyUsageExtension]
        OID=1.3.6.1.5.5.7.3.1 ; this is for Server Authentication
        ;-----------------------------------------------

    2. Create the certificate request by running:

        certreq -new Servername.inf Servername.req

    3. Attempt to submit the certificate request to the CA by running:

        certreq -submit -attrib "CertificateTemplate: DomainController" request.req

    At which point I get the following error:

        The DNS name is unavailable and cannot be added to the Subject Alternate Name.
        0x8009480f (-2146875377)

    Troubleshooting steps I have taken so far:

    1. Modified the Domain Controller template to supply the subject name in the request, restarted the Certificate Service, included the SAN in the request; same error.
    2. Re-installed Certificate Services / IIS / restarted the machine countless times.

    Any help resolving the issue would be greatly appreciated.

  • Bridging Virtual Networking into Real LAN on an OpenNebula Cluster

    - by user101012
    I'm running OpenNebula with 1 cluster controller and 3 nodes. I registered the nodes at the front-end controller, and I can start an Ubuntu virtual machine on one of the nodes. However, from my network I cannot ping the virtual machine, and I am not quite sure if I have set up the virtual machine correctly. The nodes all have a br0 interface which is bridged with eth0; the IP addresses are in the 192.168.1.x range. The template file I used for the vmnet is:

        NAME = "VM LAN"
        TYPE = RANGED
        BRIDGE = br0                       # Replace br0 with the bridge interface from the cluster nodes
        NETWORK_ADDRESS = 192.168.1.128    # Replace with corresponding IP address
        NETWORK_SIZE = 126
        NETMASK = 255.255.255.0
        GATEWAY = 192.168.1.1
        NS = 192.168.1.1

    However, I cannot reach any of the virtual machines, even though Sunstone says the virtual machine is running and onevm list also states that the VM is running. It might be helpful to know that we are using KVM as the hypervisor, and I am not quite sure whether the virbr0 interface, which was automatically created when installing KVM, might be a problem.
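
    A quick way to check whether the VM's virtual NIC actually landed on the LAN bridge (a diagnostic sketch, run on the node hosting the VM):

        # br0 should list both eth0 and the VM's vnet/tap interface:
        brctl show

        # If the tap device sits on virbr0 (KVM/libvirt's default NAT bridge)
        # instead, the VM is on the 192.168.122.x NAT network and will not be
        # reachable from the real LAN.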

  • Installing Munin on CentOS 6

    - by justinhj
    I've hit problems installing Munin on CentOS 6. This seems to be a conflict between parts of Perl; I think the version of Perl is newer on CentOS 6 (v5.10.1). When installing Munin via yum I get errors relating to Perl dependencies, as below. I'm not a big enough whiz at yum or rpm to figure out the issue, and the Munin documentation does not yet talk about installing on CentOS 6.0.

        Error: Package: munin-node-1.4.2-0.rpl1.el5.noarch (/munin-node-1.4.2-0.rpl1.el5.noarch)  Requires: perl(Net::SNMP)
        Error: Package: munin-1.4.2-0.rpl1.el5.noarch (/munin-1.4.2-0.rpl1.el5.noarch)  Requires: bitstream-vera-fonts
        Error: Package: munin-1.4.2-0.rpl1.el5.noarch (/munin-1.4.2-0.rpl1.el5.noarch)  Requires: perl(HTML::Template)
        Error: Package: munin-1.4.2-0.rpl1.el5.noarch (/munin-1.4.2-0.rpl1.el5.noarch)  Requires: perl-Net-SNMP
        Error: Package: munin-common-1.4.2-0.rpl1.el5.noarch (/munin-common-1.4.2-0.rpl1.el5.noarch)  Requires: perl(:MODULE_COMPAT_5.8.8)
        Error: Package: munin-node-1.4.2-0.rpl1.el5.noarch (/munin-node-1.4.2-0.rpl1.el5.noarch)  Requires: perl(DBI)
        Error: Package: munin-1.4.2-0.rpl1.el5.noarch (/munin-1.4.2-0.rpl1.el5.noarch)  Requires: perl(Log::Log4perl)
        Error: Package: munin-node-1.4.2-0.rpl1.el5.noarch (/munin-node-1.4.2-0.rpl1.el5.noarch)  Requires: perl(LWP::Simple)
        Error: Package: munin-node-1.4.2-0.rpl1.el5.noarch (/munin-node-1.4.2-0.rpl1.el5.noarch)  Requires: perl(:MODULE_COMPAT_5.8.8)
        Error: Package: munin-1.4.2-0.rpl1.el5.noarch (/munin-1.4.2-0.rpl1.el5.noarch)  Requires: perl(RRDs)
        Error: Package: munin-node-1.4.2-0.rpl1.el5.noarch (/munin-node-1.4.2-0.rpl1.el5.noarch)  Requires: perl-Net-Server
        Error: Package: munin-1.4.2-0.rpl1.el5.noarch (/munin-1.4.2-0.rpl1.el5.noarch)  Requires: perl(Date::Manip)
        Error: Package: munin-1.4.2-0.rpl1.el5.noarch (/munin-1.4.2-0.rpl1.el5.noarch)  Requires: perl(:MODULE_COMPAT_5.8.8)
        Error: Package: munin-1.4.2-0.rpl1.el5.noarch (/munin-1.4.2-0.rpl1.el5.noarch)  Requires: perl-Net-Server
        Error: Package: munin-1.4.2-0.rpl1.el5.noarch (/munin-1.4.2-0.rpl1.el5.noarch)  Requires: perl(CGI::Fast)
        Error: Package: munin-1.4.2-0.rpl1.el5.noarch (/munin-1.4.2-0.rpl1.el5.noarch)  Requires: perl(Time::HiRes)

  • Cacti not working for SNMP data sources

    - by lorenzo-s
    I installed the cacti and snmpd packages on a Debian server. I'm able to display common graphs in Cacti (such as memory usage, load average, logged-in users, etc.) using the data templates listed as Unix. Now I want to replace these graphs with new ones using SNMP data sources, because I see there is also CPU usage, and because I can't rule out having to manage multiple hosts in the future. So I installed snmpd on the machine and left snmpd.conf as it is. In Cacti, I created three new data sources from SNMP templates for the host 127.0.0.1:

    - ucd/net - CPU Usage - Nice
    - ucd/net - CPU Usage - System
    - ucd/net - CPU Usage - User

    Then I created a new graph from the ucd/net - CPU Usage template and selected the three data sources in the Graph Item Fields section. The graph is now enabled and running, but empty; no data has been collected. Under Console - Devices my SNMP host is listed as up and running:

        System:   Linux ip-xx-xx-xxx-xxx 3.2.0-23-virtual #36-Ubuntu SMP Tue Apr 10 22:29:03 UTC 2012 x86_64
        Uptime:   929267 (0 days, 2 hours, 34 minutes)
        Hostname: ip-xx-xx-xxx-xxx
        Location: Sitting on the Dock of the Bay
        Contact:  Me [email protected]

    In SNMP Options I left everything as it is:

        SNMP Version: Version 1
        SNMP Community: public
        SNMP Timeout: 500 ms
        Maximum OID's Per Get Request: 10

    In Console - Utilities - Cacti Log I have multiple warnings (two for each data source) every 5 minutes:

        10/29/2012 01:45:01 PM - CMDPHP: Poller[0] Host[2] DS[18] WARNING: Result from SNMP not valid. Partial Result: U
        10/29/2012 01:45:01 PM - CMDPHP: Poller[0] WARNING: SNMP Get Timeout for Host:'127.0.0.1', and OID:'.1.3.6.1.4.1.2021.4.15.0'
        10/29/2012 01:45:01 PM - CMDPHP: Poller[0] Host[1] DS[9] WARNING: Result from SNMP not valid. Partial Result: U
        10/29/2012 01:45:01 PM - CMDPHP: Poller[0] WARNING: SNMP Get Timeout for Host:'127.0.0.1', and OID:'.1.3.6.1.4.1.2021.11.52.0'
        10/29/2012 01:40:01 PM - CMDPHP: Poller[0] Host[2] DS[19] WARNING: Result from SNMP not valid. Partial Result: U
        10/29/2012 01:40:01 PM - CMDPHP: Poller[0] WARNING: SNMP Get Timeout for Host:'127.0.0.1', and OID:'.1.3.6.1.4.1.2021.4.6.0'
        [...]

    I have the feeling I'm missing something, but I cannot get it...
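
    The OIDs timing out are all under .1.3.6.1.4.1.2021 (the UCD-SNMP tree), and Debian's stock snmpd.conf of that era only exposes the system subtree, via something like "rocommunity public default -V systemonly". A hedged sketch of the check and the change:

        # Does the agent answer a UCD OID at all? (Same OID as in the Cacti log.)
        snmpget -v1 -c public 127.0.0.1 .1.3.6.1.4.1.2021.4.6.0

        # In /etc/snmp/snmpd.conf, widen the view for localhost, e.g. replace
        #   rocommunity public default -V systemonly
        # with:
        rocommunity public 127.0.0.1

        # then:
        service snmpd restart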

  • How do I add additional parameters to query string of a Firefox Search Plugin?

    - by Goto10
    I have just installed the DuckDuckGo add-on in Firefox 11.0, running on XP SP3. I would like to add additional parameters to the query string; however, any changes I make are not reflected in the query string when doing a search. I found the duckduckgo.xml file at C:\Documents and Settings\User Name\Application Data\Mozilla\Firefox\Profiles\Profile Name.default\searchplugins. I opened it up with Notepad++ and added the line for kl=uk-en:

        <SearchPlugin xmlns="http://www.mozilla.org/2006/browser/search/" xmlns:os="http://a9.com/-/spec/opensearch/1.1/">
            <os:ShortName>DuckDuckGo</os:ShortName>
            <os:Description>Search DuckDuckGo (SSL)</os:Description>
            <os:InputEncoding>UTF-8</os:InputEncoding>
            <os:Image width="16" height="16">data:image/x-icon;base64, -Removed to shorten-</os:Image>
            <os:Url type="text/html" method="GET" template="https://duckduckgo.com/">
                <os:Param name="q" value="{searchTerms}"/>
                <os:Param name="kl" value="uk-en"/>
            </os:Url>
        </SearchPlugin>

    However, the kl=uk-en parameter does not appear in the query string when searching (despite several Firefox restarts).
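
    Firefox of that vintage caches parsed search plugins inside the profile rather than re-reading the .xml on every start, so edits to the file stay invisible until the cache is rebuilt. A hedged sketch (the cache file names may vary by Firefox version):

        rem With Firefox fully closed, remove the search plugin cache:
        cd "C:\Documents and Settings\User Name\Application Data\Mozilla\Firefox\Profiles\Profile Name.default"
        del search.json
        del search.sqlite

    On the next start Firefox should re-parse the plugin XML and pick up the added kl parameter.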

  • Puppet: array in parameterized classes VS using resources

    - by Luke404
    I have some use cases where I want to define multiple similar resources that should end up in a single file (via a template). As an example, I'm trying to write a Puppet module that will let me manage the mapping between MAC addresses and network interface names (writing udev's persistent-net-rules file from Puppet), but there are also many other similar use cases. I searched around and found that it could be done with the new parameterized class syntax; implemented that way, it would end up being used like this:

        node "myserver.example.com" {
          class { "network::iftab":
            interfaces => {
              "eth0" => { "mac" => "ab:cd:ef:98:76:54" },
              "eth1" => { "mac" => "98:76:de:ad:be:ef" },
            }
          }
        }

    Not too bad, I agree, but it would rapidly explode when you manage more complex stuff (think network configurations like in this module, or any other multiple-complex-resources-in-a-single-config-file stuff). In a similar question on SF someone suggested using Pienaar's puppet-concat module, but I doubt it could get any better than parameterized classes. What would be really cool and clean in the configuration definition would be something like the included host type: its usage is simple, pretty and clean, and it naturally maps multiple resources to a single configured destination. Transposed to my example it would be like:

        node "myserver.example.com" {
          interface { "eth0":
            mac => "ab:cd:ef:98:76:54",
            foo => "bar",
            asd => "lol",
          }
          interface { "eth1":
            mac => "98:76:de:ad:be:ef",
            foo => "rab",
            asd => "olo",
          }
        }

    ...which looks much better to my eyes, even with 3x options on each resource. Should I really be passing hashes to parameterized classes, or is there a better way to do this kind of stuff? Is there some accepted consensus in the Puppet [users|developers] community? By the way, I'm referring to the latest stable release of the 2.7 branch, and I am not interested in compatibility with older versions.
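
    What the second example describes is exactly Puppet's defined resource type: a define can be declared many times with different titles, just like the built-in host type. A sketch combining a define with the concat module (the names here, network::interface, the ERB template, the fragment wiring, are all hypothetical):

        # One fragment of /etc/iftab per declared interface
        define network::interface($mac, $foo = undef, $asd = undef) {
          concat::fragment { "iftab-${name}":
            target  => "/etc/iftab",
            content => template("network/iftab-line.erb"),
          }
        }

        concat { "/etc/iftab": }

        # Usage, straight out of the question:
        network::interface { "eth0": mac => "ab:cd:ef:98:76:54" }
        network::interface { "eth1": mac => "98:76:de:ad:be:ef" }

    Inside the template, $name, $mac, and friends are in scope, so each fragment renders one line of the target file.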

  • Run script before shutdown/restart

    - by dtbarne
    I'd like to run a PHP script when an instance is told to shut down, but of course before it actually finishes shutting down. My particular script just pushes some log files from the local partition to another server. I've got the gist of how this process works, but I need some clarification. How I understand it (please correct me if I'm wrong):

    1. Create an executable script in /etc/init.d (let's call it /etc/init.d/push-logs).
    2. Create a symlink to /etc/init.d/push-logs from /etc/rc0.d (shutdown) and /etc/rc6.d (reboot). The name should be KXXpush-logs.

    Here are my questions:

    1. Of course: am I understanding correctly?
    2. For #2 above, it sounds like the lower the XX the better. Is there a number too low to use? Does it matter if it shares a number with another script?
    3. Does the script in /etc/init.d/push-logs HAVE to follow the standard init.d template (supporting start/stop, etc. commands)? This doesn't really apply to my use case. If possible I just want the script to be the following:

        #!/bin/sh
        #
        # Run PHP file prior to shutdown
        #
        /usr/bin/php /path/to/php_file.php
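
    For what it's worth, the K-links in /etc/rc0.d and /etc/rc6.d are invoked with the argument "stop", so a minimal sketch answering question 3 is to act only on that argument and exit cleanly otherwise (paths assumed from the question):

        #!/bin/sh
        # /etc/init.d/push-logs -- push log files to another server before halt/reboot.
        # Called as "push-logs stop" by the K-links in /etc/rc0.d and /etc/rc6.d.
        case "$1" in
          stop)
            /usr/bin/php /path/to/php_file.php
            ;;
          *)
            # nothing to do for start/restart/status
            ;;
        esac
        exit 0

    One caveat worth stating: by the time the rc0/rc6 scripts run, networking may already be on its way down, so the K-number must sort this script before the network is stopped.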

  • Convert HTACCESS mod_rewrite directives to nginx format?

    - by Chris
    I'm brand new to nginx and I am trying to convert the app I wrote over from Apache, as I need the ability to serve a lot of clients at once without a lot of overhead! I'm getting the hang of setting up nginx and FastCGI PHP, but I can't wrap my head around nginx's rewrite format just yet. I know you have to write some simple script that goes in the server {} block in the nginx config, but I'm not yet familiar with the syntax. Could anyone with experience with both Apache and nginx help me convert this to nginx format? Thanks!

        # ------------------------------------------------------ #
        # Rewrite from canonical domain (remove www.)             #
        # ------------------------------------------------------ #
        RewriteCond %{HTTP_HOST} ^www.domain.com
        RewriteRule (.*) http://domain.com/$1 [R=301,L]

        # ------------------------------------------------------ #
        # This redirects index.php to /                           #
        # ------------------------------------------------------ #
        RewriteCond %{THE_REQUEST} ^[A-Z]+\ /(index|index\.php)\ HTTP/
        RewriteRule ^(index|index\.php)$ http://domain.com/ [R=301,L]

        # ------------------------------------------------------ #
        # This rewrites 'directories' to their PHP files,         #
        # fixes trailing-slash issues, and redirects .php         #
        # to 'directory' to avoid duplicate content.              #
        # ------------------------------------------------------ #
        RewriteCond %{DOCUMENT_ROOT}/$1.php -f
        RewriteRule ^(.*)$ $1.php [L]

        RewriteCond %{DOCUMENT_ROOT}/$1.php -f
        RewriteRule ^(.*)/$ http://domain.com/$1 [R=301,L]

        RewriteCond %{THE_REQUEST} ^[A-Z]+\ /[^.]+\.php\ HTTP/
        RewriteCond %{DOCUMENT_ROOT}/$1.php -f
        RewriteRule ^([^.]+)\.php$ http://domain.com/$1 [R=301,L]

        # ------------------------------------------------------ #
        # If it wasn't redirected previously and is not           #
        # a file on the server, rewrite to image generation       #
        # ------------------------------------------------------ #
        RewriteCond %{REQUEST_FILENAME} !-f
        RewriteCond %{REQUEST_FILENAME} !-d
        RewriteRule ^([a-z0-9_\-@#\ "'\+]+)/?([a-z0-9_\-]+)?(\.png|/)?$ generation/image.php?user=${escapemap:$1}&template=${escapemap:$2} [NC,L]
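
    A rough, untested sketch of an equivalent nginx layout. Caveats up front: the ${escapemap:...} RewriteMap has no direct nginx counterpart (that escaping would have to move into image.php or an embedded-Perl/Lua layer), the trailing-slash rule is only approximated, the image-generation character class is simplified, and the root path and fastcgi_pass target are assumptions to adapt to the local PHP setup:

        # www -> bare domain
        server {
            listen 80;
            server_name www.domain.com;
            return 301 http://domain.com$request_uri;
        }

        server {
            listen 80;
            server_name domain.com;
            root /var/www/domain;   # assumption

            # /index or /index.php -> /  ($request_uri plays the role of THE_REQUEST)
            if ($request_uri ~* ^/index(\.php)?($|\?)) {
                return 301 /;
            }

            # external /foo.php -> /foo (duplicate-content redirect)
            if ($request_uri ~* ^/([^.?]+)\.php($|\?)) {
                return 301 /$1;
            }

            location / {
                # serve real files/dirs, else fall through to the rewrites
                try_files $uri $uri/ @rewrites;
            }

            location @rewrites {
                # 'directory' URL -> its .php file, when that file exists
                if (-f $document_root$uri.php) {
                    rewrite ^(.*)$ $1.php last;
                }
                # otherwise: image generation (simplified character class)
                rewrite "^/([a-z0-9_\-@# ]+)/?([a-z0-9_\-]+)?(\.png|/)?$"
                        /generation/image.php?user=$1&template=$2 last;
            }

            location ~ \.php$ {
                try_files $uri =404;
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_pass unix:/var/run/php5-fpm.sock;   # assumption
            }
        }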
