Search Results

Search found 28756 results on 1151 pages for 'op amp'.


  • few basic questions on webhosting (nameservers & dns records)

    - by claws
    I bought a domain name on name.com & I want to use free webhosting on 110mb.com. By default name.com integrates services of Google apps. Name server entries are ns1.name.com ns2.name.com ns3.name.com ns4.name.com When I registered on 110mb.com it gave me two addresses ns1.110mb.com ns2.110mb.com This is where I'm lost. The concept is that "Domain name should point to an address of the server where the website is hosted", right? Then why are these 4 entries there by default? How exactly is it working? Should I remove these 4 and then add the 110mb.com servers, or just append the 110mb.com server addresses to the name.com ones? I would like to use Google apps. If I change these name server addresses, would that remove Google apps? I especially want to use the email service of Google. And I really don't understand what CNAME, MX and the rest are. I want to learn about this stuff & how exactly it works, but when I search for webhosting tutorials I'm unable to find any fruitful results.
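
    A quick way to see what the records actually look like (a hedged sketch — yourdomain.com is a placeholder) is to query them with dig:

      dig NS yourdomain.com +short     # which nameservers answer for the domain
      dig MX yourdomain.com +short     # where mail goes; Google's mail service uses hosts like aspmx.l.google.com
      dig A www.yourdomain.com +short  # the address a hostname resolves to

    Roughly: NS records delegate the whole domain to a DNS provider, A records map names to IP addresses, a CNAME aliases one name to another, and MX records name the mail servers — which is why mail (Google) and web hosting (110mb) can live at different providers as long as the zone that the NS records point at carries the right A and MX entries.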

    Read the article

  • Storing secure keys on Ubuntu web server

    - by Sencha
    I'm running Ubuntu 12.04 Precise with a DUNG (Django, Unix, Nginx & Gunicorn) environment and my app (as well as various config files) is stored in a python virtual environment inside /srv, which the www-data user has access to. The nginx & gunicorn processes are all run as www-data. My web app requires secure credentials which I am storing in an environment.sh file. This file contains various exports and is run using source before the gunicorn processes execute. My concern is the location of the environment.sh file and its permissions. Will it be okay storing this file inside the /srv folder where www-data has access to it? Or should it be stored and owned by root somewhere else such as /var/myapp/environment.sh? Also, regarding the www-data user, if any of my web processes (which are run as www-data) are compromised and someone gains access to them, does that mean that the user could potentially read any file on the system, even if they can't write? Including my secure keys?
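
    A common arrangement (a sketch under the paths assumed in the question, not a verified hardening guide) is to keep the file readable by www-data but writable only by root:

      sudo mkdir -p /var/myapp
      sudo mv /srv/yourapp/environment.sh /var/myapp/environment.sh   # yourapp is a placeholder
      sudo chown root:www-data /var/myapp/environment.sh
      sudo chmod 640 /var/myapp/environment.sh   # root edits; group www-data reads; everyone else sees nothing

    As for the second question: a compromised www-data process can read exactly what www-data can read — not every file on the system, but certainly this one, which is why the mode above drops world access entirely.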

    Read the article

  • How can I avoid permission denied errors when attempting to deploy a rails app with capistrano?

    - by joshee
    Total noob here. I'm attempting to deploy an app through Capistrano. I'm getting relentless permission denied errors when I attempt to run cap deploy:update. Seemingly at least some of these errors are due to missing directories that trigger a "Permission Denied" error. (I'm doing setup on root just temporarily.) set :user, 'root' set :domain, 'domainname.com' set :application, 'appname' # adjust if you are using RVM, remove if you are not $:.unshift(File.expand_path('./lib', ENV['rvm_path'])) require "rvm/capistrano" set :rvm_ruby_string, '1.9.2' # file paths set :repository, "ssh://[email protected]/~/git/appname.git" set :deploy_to, "/var/rails/appname" # distribute your applications across servers (the instructions below put them # all on the same server, defined above as 'domain', adjust as necessary) role :app, domain role :web, domain role :db, domain, :primary => true set :deploy_via, :remote_cache set :scm, 'git' set :branch, 'master' set :scm_verbose, true set :use_sudo, false set :rails_env, :production namespace :deploy do desc "cause Passenger to initiate a restart" task :restart do run "touch #{current_path}/tmp/restart.txt" end desc "reload the database with seed data" task :seed do run "cd #{current_path}; rake db:seed RAILS_ENV=#{rails_env}" end end after "deploy:update_code", :bundle_install desc "install the necessary prerequisites" task :bundle_install, :roles => :app do run "cd #{release_path} && bundle install" end Here's my result: ** [domainname.com :: out] Cloning into '/var/rails/appname/shared/cached-copy'... ** [domainname.com :: err] Permission denied, please try again. ** [domainname.com :: err] Permission denied, please try again. ** [domainname.com :: err] Permission denied (publickey,gssapi-with-mic,password). ** [domainname.com :: err] fatal: The remote end hung up unexpectedly I'm able to ssh without a password, so not sure about that publickey error. By the way, if I run cap deploy:update without set :deploy_via, :remote_cache, here's my result: ** [domainname.com :: out] Cloning into '/var/rails/appname/releases/20120326204237'... ** [domainname.com :: err] Permission denied, please try again. ** [domainname.com :: err] Permission denied, please try again. ** [domainname.com :: err] Permission denied (publickey,gssapi-with-mic,password). ** [domainname.com :: err] fatal: The remote end hung up unexpectedly command finished Thanks a lot for your help with this.
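
    The "Permission denied (publickey,...)" lines come from the git clone the server performs back to itself, which doesn't see the key you use when you ssh in; one hedged thing to test is SSH agent forwarding (hostnames as in the question):

      eval $(ssh-agent) && ssh-add                               # load your key into a local agent
      ssh -A root@domainname.com 'ssh -T git@domainname.com'     # can the server hop to the repo with your forwarded key?

    If that works, telling Capistrano to forward the agent (in deploy.rb: ssh_options[:forward_agent] = true) usually clears the clone step.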

    Read the article

  • Mount a VHD on Mac OS X

    - by janm
    Is it possible (how) to mount a VHD file created by Windows 7 in OS X? I found some information about how to do this on Linux. There is a fuse fs "vdfuse" which uses virtualbox libs to mount filesystems supported by virtualbox. However I was unable to compile the package on OS X because nearly all headers are missing and I doubt that it would work anyway... EDIT #2: Okay I got my hands dirty and finally compiled vdfuse (http://forums.virtualbox.org/viewtopic.php?f=26&t=33355&start=0) on OS X. As a starting point I used macfuse (http://code.google.com/p/macfuse/) and looked at the example file systems. This led me to the following build script infile=vdfuse.c outfile=vdfuse incdir="your/path/to/vbox/headers" INSTALL_DIR="/Applications/VirtualBox.app/Contents/MacOS" CFLAGS="-pipe" gcc -arch i386 "${infile}" \ "${INSTALL_DIR}"/VBoxDD.dylib \ "${INSTALL_DIR}"/VBoxDDU.dylib \ "${INSTALL_DIR}"/VBoxVMM.dylib \ "${INSTALL_DIR}"/VBoxRT.dylib \ "${INSTALL_DIR}"/VBoxDD2.dylib \ "${INSTALL_DIR}"/VBoxREM.dylib \ -o "${outfile}" \ -I"${incdir}" -I"/usr/local/include/fuse" \ -Wl,-rpath,"${INSTALL_DIR}" \ -lfuse_ino64 \ -Wall ${CFLAGS} You actually don't need to compile VirtualBox on your machine, just install a recent version of VirtualBox. So now I can partially mount VHDs. The separate partitions appear as block files Partition1, Partition2, ... on my mount point. However Mac OS X does not include a loopback file system and macfuse's loopback fs does not work with block files, so we need a loopback fs to mount the block files as actual partitions.
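
    For reference, once vdfuse builds, invoking it typically looks like this (a sketch following vdfuse's usage text; paths are placeholders):

      mkdir -p /Volumes/vhd
      ./vdfuse -f ~/disk.vhd /Volumes/vhd   # exposes EntireDisk plus Partition1, Partition2, ... as block files
      ls /Volumes/vhd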

    Read the article

  • Opera's problem with magnet-links

    - by Dmitriy Matveev
    I've encountered the following problem while using the Opera web browser: when I click on a magnet link on some web page, like magnet:?xt=urn:tree:tiger:CXW6MJFRNOEFU2STCBWWOIYZLVCR2FTR37SQCXY&xl=352342016&dn=ER%20-%207x16%20-%20Witch%20Hunt.avi, Opera asks me if I want to open that link with my DC++ client. If I click the 'Yes' button, my DC++ client correctly "opens" the clicked magnet link and performs some action on it. There is also a "Do not show this dialog again" option in Opera's dialog, but it doesn't seem to be working correctly. If I check that option before answering 'Yes' and then click on another magnet link of the same kind, Opera will again ask me how to open the new link. I haven't found a protocol association in the 'Control Panel > Default Programs > Set Associations' part of my Windows Vista settings, but if I paste a magnet link into the "Run" dialog, Vista handles that link perfectly. I've tried to find out how to manually set the protocol association in Opera and found the 'Programs' page of the browser's advanced settings. There I discovered that instead of storing protocol-to-application associations, Opera tries to store per-link associations (there are several entries with the exact links as they were on the web page as the value of the protocol field). If I click on the links that are already stored in Opera's protocol associations, the browser asks me about them again. I haven't found any information on how to resolve this problem on the internet; maybe someone on this site will be able to help me.

    Read the article

  • Computer won't boot properly unless in safe mode?

    - by Mr_CryptoPrime
    I bought a computer today and booted it up, but when I did I only got a blank screen. I checked to make sure it wasn't the monitor by connecting it to my old computer...it worked. I then tried connecting my monitor to both DVI ports and found that the bottom one did work. However, now it just boots up and says "loading windows", and then when the login screen is supposed to come up the screen just goes blank and the monitor says "no input, check cord" (or something like that). I tried reinstalling windows and then I was able to log on normally. I used the CDs and reinstalled all the drivers, then rebooted...now I am stuck right back where I started. I tried taking out the RAM and inserting it into different slots; that didn't fix anything. I was able to boot up into windows using safe-mode. I suspected that my ATI Radeon 6950 was the issue and downloaded the drivers, but I can't install them in safe-mode. Someone said to install the C++ redistributable and I tried doing that to fix the driver installation problem of "failed to load detection driver", but it wouldn't let me do that either. Please someone help me, I don't want to have to deal with the evil redtape of sending it back to get a replacement! My computer: http://www.newegg.com/Product/Product.aspx?Item=N82E16883229236 Driver detection problem: http://www.hardwareheaven.com/hardwareheaven-tools-discussion/174912-failed-load-detection-driver-installation-error.html Driver download page: http://sites.amd.com/us/game/downloads/Pages/radeon_win7-64.aspx#1 I am using windows 7. Thanks again.

    Read the article

  • External HDD incorrectly detected as internal - how to change to enable hot swap/eject?

    - by Sam
    Hi All, I have win 7 x64 Home Prem. The HDD is a Seagate Barracuda 7200.7 ST3120827AS, 3.5", Serial: 3ms006n6, Firmware: 3.42 (no further updates), in a NexStar CX external case (drivers installed). I have three drives: WD320 with OS installed WD750 data storage (internal) Seagate 120 (external) - connected via an esata board connected to sata on the motherboard (MSI p43 neo) Tried uninstalling the HDD in device manager to no effect. Also the internal WD750 is detected as an external drive and the win taskbar icon allows it to be ejected (unlike the Seagate). All drives are configured - Online, Simple, Basic, NTFS, Active, Primary Partition (except c drive). The Seagate was previously used as a primary disk with the XP operating system, so I deleted the volume and created/reformatted (not quick). The HDD is no longer "Active". But that did not fix the problem. Background Originally, I installed win 7 with the bios set to IDE and forgot to install the chipset drivers. Then I changed win 7 to install the AHCI drivers, changed the bios to AHCI and rebooted. Win 7 loaded drivers but the WD HDD gave problems/crashed. I installed the chipset drivers and the latest Intel storage matrix software thingie (in safe mode). Everything worked fine after that except for the problem of not correctly detecting the external drive. I have noticed that under the driver properties (and similarly in the registry) the two drives are configured differently (e.g. in driver details property capabilities, for the WD the value is set to 0000006, CM_DEVCAP_REMOVABLE & EJECTSUPPORTED - whereas the Seagate shows 0000080 & CM_DEVCAP_SURPRISEREMOVALOK). Any easy way to configure things? I tried physically swapping the sata connections on the mainboard without success. So far I have found that a solution to my problem might be to perform some reg changes: http://superuser.com/questions/12955/how-do-i-remove-the-option-to-eject-sata-drives-from-the-windows-7-tray-icon
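
    For what it's worth, the registry change behind that link works per SATA channel; a heavily hedged sketch for the msahci driver (the channel number is a guess — match it to the port the Seagate sits on, and export the key first as a backup):

      reg add "HKLM\SYSTEM\CurrentControlSet\Services\msahci\Controller0\Channel3" /v TreatAsInternalPort /t REG_DWORD /d 0

    Setting TreatAsInternalPort to 1 is how people hide the eject option for internal drives, so 0 (or deleting the value) should leave that port treated as removable; note the Intel storage matrix driver may use its own keys instead of msahci's.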

    Read the article

  • Upgrading only certain packages via the getdeb repo

    - by intuited
    I'm a bit confused about how getdeb.net works now. The last time I got a package from there was a while ago; at that point the procedure was that you would just download a .deb for each package that you wanted to install/upgrade and then install it using dpkg -i. However the inexorable march of progress has lent its trumpets to this system as well, and getdeb installs are now done via their repo, which is registered with apt in /etc/apt/sources.list.d, after you install a single package that makes the changes to the apt database. I've installed that package, and I've discovered that aptitude dist-upgrade now wants to upgrade a lot of packages on my system that weren't ready for upgrades prior to the installation of the getdeb package. If I rename the file /etc/apt/sources.list.d/getdeb.list to something with a different extension, then do aptitude update && aptitude dist-upgrade, it stops wanting to upgrade packages. So I gather that the default behaviour is now to upgrade all packages to the version available at getdeb. This is not particularly appropriate, since these packages are not as well tested as the officially released versions. Is there a config setting somewhere that will prevent upgrading packages to versions from the getdeb repo unless this action is specifically selected? I'd like to be able to pick and choose what packages are upgraded via getdeb.
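
    Apt pinning does exactly this: give the getdeb origin a priority below the default 500 so its versions are never picked automatically but remain installable on request. A sketch for /etc/apt/preferences.d/getdeb (the origin string is an assumption — check the output of apt-cache policy for the exact value):

      Package: *
      Pin: origin "archive.getdeb.net"
      Pin-Priority: 50

    With a priority under 100, aptitude dist-upgrade leaves installed packages on the Ubuntu versions, and you pull a getdeb build only by naming it explicitly, e.g. aptitude install somepackage=<getdeb-version>.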

    Read the article

  • PHP / SSH2 Multi-threading

    - by Asad Moeen
    I'm basically done using SSH2 with PHP. Some may already know that while using it, the PHP code actually waits for all the listed commands to be executed over SSH, and when everything is done it then gives back the results. That is fine for the work I am doing, but I need some commands to be multi-threaded. $cmd = MyCommand; echo $ssh->exec($cmd); So I just want this to run in parallel 2 times. I googled some stuff but didn't get along with it. For a basic approach, I came across this way posted by someone, but it didn't work out for me. for ($i = 0; $i < 2; $i += 1) { exec("php test_1.php $i > test.txt &"); // this will execute test_1.php and leave the process executing in the background, going to the next iteration of the loop immediately without waiting for the completion of the script in test_1.php; $i is passed as an argument. } I tried to put it this way: exec("echo $ssh->exec($cmd) $i > test.txt &"); in the loop, but either it never entered the loop or the echo $ssh->exec failed. I don't really need very neat multi-threading. Even a single second delay would do. Thank you.
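
    The usual workaround is exactly the pattern above: move the $ssh->exec() call into a small standalone script and launch two instances from the shell, backgrounded. A sketch (worker.php is a hypothetical script that opens its own SSH2 connection and runs the command for the given index):

      php worker.php 0 > out0.txt 2>&1 &   # first instance, backgrounded
      php worker.php 1 > out1.txt 2>&1 &   # second instance runs in parallel
      wait                                 # optional: block until both finish

    The inline attempt fails because $ssh->exec() runs (and blocks) inside the current PHP process before exec() ever sees the string; each parallel command needs its own process and its own SSH connection.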

    Read the article

  • Exadata Hybrid Columnar Compression and conventional INSERT/UPDATE

    - by Liu Maclean(???)
    Hybrid Columnar Compression (HCC) is one of Exadata's flagship features and is a separate thing from the Advanced Compression option. HCC stores rows in compression units (CUs): each CU spans several data blocks, and within a CU the values of each column are stored together, which is what yields the high compression ratios. HCC compression is only applied during bulk initial loads and other direct-path operations (direct load, ALTER TABLE MOVE, IMPDP, append INSERT); conventionally inserted or updated rows do not end up HCC-compressed. When a row inside a CU is updated, it is migrated out of the CU and stored in a less compressed form, so HCC is not a good fit for OLTP-style workloads. The following test demonstrates this:

    SQL*Plus: Release 11.2.0.2.0 Production on Wed Sep 12 06:14:53 2012 Copyright (c) 1982, 2010, Oracle. All rights reserved. Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production With the Partitioning, Automatic Storage Management, OLAP, Data Mining and Real Application Testing options SQL> grant dba to scott; Grant succeeded. SQL> conn scott/oracle Connected. SQL> SQL> create table hcc_maclean tablespace users compress for query high as select * from dba_objects; Table created. 1* select rowid,owner,object_name,dbms_rowid.rowid_block_number(rowid) from hcc_maclean where owner='MACLEAN' SQL> / ROWID OWNER OBJECT_NAME DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID) ------------------------------ ------------------------------ -------------------- ------------------------------------ AAAThuAAEAAAHTJAOI MACLEAN SALES 29897 AAAThuAAEAAAHTJAOJ MACLEAN MYCUSTOMERS 29897 AAAThuAAEAAAHTJAOK MACLEAN MYCUST_ARCHIVE 29897 AAAThuAAEAAAHTJAOL MACLEAN MYCUST_QUERY 29897 AAAThuAAEAAAHTJAOh MACLEAN COMPRESS_QUERY 29897 AAAThuAAEAAAHTJAOi MACLEAN UNCOMPRESS 29897 AAAThuAAEAAAHTJAOj MACLEAN CHAINED_ROWS 29897 AAAThuAAEAAAHTJAOk MACLEAN COMPRESS_QUERY1 29897 8 rows selected. select dbms_rowid.rowid_block_number(rowid),dbms_rowid.rowid_relative_fno(rowid) from hcc_maclean where owner='MACLEAN';

    Now update two different rows of the same CU from two sessions: session A: update hcc_maclean set OBJECT_NAME=OBJECT_NAME||'DBM' where rowid='AAAThuAAEAAAHTJAOI'; session B: update hcc_maclean set OBJECT_NAME=OBJECT_NAME||'DBM' where rowid='AAAThuAAEAAAHTJAOJ'; SQL> select sid,wait_event_text,BLOCKER_SID from v$wait_chains; SID WAIT_EVENT_TEXT BLOCKER_SID ---------- ---------------------------------------------------------------- ----------- 13 enq: TX - row lock contention 136 136 SQL*Net message from client Session A blocks session B even though the two sessions update different rows: with HCC, updating a row in a CU in effect locks the whole CU. A block dump confirms this:

    SQL> alter system checkpoint; System altered. SQL> / System altered. SQL> alter system dump datafile 4 block 29897 2 ; Block header dump: 0x010074c9 Object id on Block? Y seg/obj: 0x1386e csc: 0x00.1cad7e itc: 3 flg: E typ: 1 - DATA brn: 0 bdba: 0x10074c8 ver: 0x01 opc: 0 inc: 0 exflg: 0 Itl Xid Uba Flag Lck Scn/Fsc 0x01 0xffff.000.00000000 0x00000000.0000.00 C--- 0 scn 0x0000.001cabfa 0x02 0x000a.00a.00000430 0x00c051a7.0169.17 ---- 1 fsc 0x0000.00000000 0x03 0x0000.000.00000000 0x00000000.0000.00 ---- 0 fsc 0x0000.00000000 avsp=0x14 tosp=0x14 r0_9ir2=0x0 mec_kdbh9ir2=0x0 76543210 shcf_kdbh9ir2=---------- 76543210 flag_9ir2=--R----- Archive compression: Y fcls_9ir2[0]={ } 0x16:pti[0] nrow=1 offs=0 0x1a:pri[0] offs=0x30 block_row_dump: tab 0, row 0, @0x30 tl: 8016 fb: --H-F--N lb: 0x2 cc: 1 ==> the whole CU is locked by ITL slot 0x02 nrid: 0x010074ca.0 col 0: [8004] Compression level: 02 (Query High) Length of CU row: 8004 kdzhrh: ------PC CBLK: 1 Start Slot: 00 NUMP: 01 PNUM: 00 POFF: 7984 PRID: 0x010074ca.0 CU header: CU version: 0 CU magic number: 0x4b445a30 CU checksum: 0xf8faf86e CU total length: 8694 CU flags: NC-U-CRD-OP ncols: 15 nrows: 995 algo: 0 CU decomp length: 8487 len/value length: 100111 row pieces per row: 1 num deleted rows: 1 deleted rows: 904, START_CU:

    DBMS_COMPRESSION.GET_COMPRESSION_TYPE shows the compression type of an individual row: SQL> select DBMS_COMPRESSION.GET_COMPRESSION_TYPE('SCOTT','HCC_MACLEAN','AAAThuAAEAAAHTJAOk') from dual; DBMS_COMPRESSION.GET_COMPRESSION_TYPE('SCOTT','HCC_MACLEAN','AAATHUAAEAAAHTJAOK' -------------------------------------------------------------------------------- 4 COMP_NOCOMPRESS CONSTANT NUMBER := 1;COMP_FOR_OLTP CONSTANT NUMBER := 2;COMP_FOR_QUERY_HIGH CONSTANT NUMBER := 4;COMP_FOR_QUERY_LOW CONSTANT NUMBER := 8;COMP_FOR_ARCHIVE_HIGH CONSTANT NUMBER := 16;COMP_FOR_ARCHIVE_LOW CONSTANT NUMBER := 32; COMP_RATIO_MINROWS CONSTANT NUMBER := 1000000;COMP_RATIO_ALLROWS CONSTANT NUMBER := -1; Per the constants above, COMP_FOR_QUERY_HIGH is 4 and COMP_FOR_QUERY_LOW is 8, so GET_COMPRESSION_TYPE returning 4 for this rowid confirms the row is stored with QUERY HIGH compression. Now update the rows conventionally:

    SQL> update hcc_maclean set OBJECT_NAME=OBJECT_NAME||'DBM' where owner='MACLEAN'; 8 rows updated. SQL> commit; Commit complete. SQL> select DBMS_COMPRESSION.GET_COMPRESSION_TYPE('SCOTT','HCC_MACLEAN',rowid) from HCC_MACLEAN where owner='MACLEAN'; DBMS_COMPRESSION.GET_COMPRESSION_TYPE('SCOTT','HCC_MACLEAN',ROWID) ------------------------------------------------------------------ 1 1 1 1 1 1 1 1 8 rows selected.

    After the conventional UPDATE, the COMPRESSION_TYPE of these rows has changed from COMP_FOR_QUERY_HIGH to COMP_NOCOMPRESS: the updated rows were migrated out of the CU and are no longer compressed, even though the table is defined as compress for query high. In 11g there is no way to re-compress such rows in place; an ALTER TABLE MOVE rebuilds the segment and re-applies HCC compression:

    SQL> ALTER TABLE hcc_MACLEAN move COMPRESS FOR ARCHIVE HIGH; Table altered. SQL> select DBMS_COMPRESSION.GET_COMPRESSION_TYPE('SCOTT','HCC_MACLEAN',rowid) from HCC_MACLEAN where owner='MACLEAN'; DBMS_COMPRESSION.GET_COMPRESSION_TYPE('SCOTT','HCC_MACLEAN',ROWID) ------------------------------------------------------------------ 16 16 16 16 16 16 16 16 8 rows selected.

    Read the article

  • CIFS Mounting Permissions

    - by malco
    I have an issue that I'm going round in circles with; I hope you can help. The Set up: Server 1 (CIFS Client) - CentOS 6.3, AD integrated using Samba/Winbind & idmap_ad Server 2 (CIFS Server) - CentOS 6.3, AD integrated using Samba/Winbind & idmap_ad All users (apart from root) are AD authenticated and this, including groups etc., works happily. What's working: I have created a share on Server 2: [share2] path = /srv/samba/share2 writeable = yes Permissions on the share: drwxrwx---. 2 root domain users 4096 Oct 12 09:21 share2 I can log into a Windows machine as user5 (member of domain users) and everything works as it should; for example, if I create a file it shows the correct permissions and attributes on both the MS and the Linux sides. Where I Fall Down: I mount the share on Server 1 using: # mount //server2/share2 /mnt/share2/ -o username=cifsmount,password=blah,domain=blah Or using fstab: //server2/share2 /mnt/share2 cifs credentials=/blah/.creds 0 0 This mounts fine, but.... If I su, or log onto server 1 as a normal user (say user5), and try to create a file I get: # touch test touch: cannot touch `test': Permission denied Then if I check the folder, the file was created but as the cifsmount user: -rw-r--r--. 1 cifsmount domain users 0 Oct 12 09:21 test I can rename, delete, move or copy stuff around as user5, I just can't create anything. What am I doing wrong? I'm guessing it's something to do with the mount action, as when I log onto server2 as user5 and access the folder locally it all works as it should. Can anyone point me in the right direction?
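
    If the goal is for each logged-in user on server 1 to act as themselves instead of as cifsmount, the usual suggestion is the cifs "multiuser" mount option — a sketch (it needs a kernel and cifs-utils build with multiuser support, and sec= may need adjusting to your AD setup):

      mount -t cifs //server2/share2 /mnt/share2 -o multiuser,sec=ntlmssp,credentials=/blah/.creds
      # then, as user5, attach user5's own credentials for server2:
      cifscreds add server2

    With a plain single-credential mount, every operation really is sent to server 2 as cifsmount regardless of the local user, which matches the behaviour you are seeing.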

    Read the article

  • Dual-head monitor system Kubuntu 10.04

    - by andrii
    I have a notebook Asus V6X00V with a 1400*1050 monitor (name: LVDS) and a Dell monitor 1920*1080 (VGA-0). I want to have a dual monitor system. On MS Windows everything works fine. During the Kubuntu installation the Dell and the main notebook monitors had the right resolutions (1920*1080 & 1400*1050), but after some stage they changed to 1152*864 for both. Now the right resolution appears only during the turning-off process and when I am using the console, which shows that the system can use these resolutions; the problem is just in the settings. I am using Size & Orientation - System Settings for setting adjustment. Any option that changes the resolution for any monitor or changes position (Absolute, Left Of, Right Of and so on) causes color line noise on the screens. I have tried xrandr: xrandr --output LVDS --mode 1400x1050 --pos 0x0 --output VGA-0 --mode 1920x1080 --right-of LVDS --pos 1400x0 but received the same result. I have found out that, for example, the previous version of RandR (1.2; now I have xrandr 1.3) needs an xorg.conf file modification to create a big virtual screen, but Kubuntu 10.04 doesn't have an xorg.conf and I don't know whether I should modify xorg for the 1.3 version of xrandr or not. Please help me to solve this problem.
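
    With RandR 1.3 the framebuffer can usually be grown on the fly, so a sketch worth trying with the output names above (1400+1920 = 3320 wide, 1080 tall):

      xrandr --fb 3320x1080 \
             --output LVDS --mode 1400x1050 --pos 0x0 \
             --output VGA-0 --mode 1920x1080 --pos 1400x0

    If the server refuses because the maximum screen size is too small, the fallback is creating /etc/X11/xorg.conf (Kubuntu 10.04 honours one if present) with a Virtual 3320 1080 line in the Display subsection of the Screen section.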

    Read the article

  • Storing non-content data in Orchard

    - by Bertrand Le Roy
    A CMS like Orchard is, by definition, designed to store content. What differentiates content from other kinds of data is rather subtle. The way I would describe it is by saying that if you would put each instance of a kind of data on its own web page, if it would make sense to add comments to it, or tags, or ratings, then it is content and you can store it in Orchard using all the convenient composition options that it offers. Otherwise, it probably isn't and you can store it using somewhat simpler means that I will now describe. In one of the modules I wrote, Vandelay.ThemePicker, there is some configuration data for the module. That data is not content by the definition I gave above. Let's look at how this data is stored and queried. The configuration data in question is a set of records, each of which has a number of properties: public class SettingsRecord { public virtual int Id { get; set;} public virtual string RuleType { get; set; } public virtual string Name { get; set; } public virtual string Criterion { get; set; } public virtual string Theme { get; set; } public virtual int Priority { get; set; } public virtual string Zone { get; set; } public virtual string Position { get; set; } } Each property has to be virtual for nHibernate to handle it (it creates derived classes that are instrumented in all kinds of ways). We also have an Id property. The way these records will be stored in the database is described from a migration: public int Create() { SchemaBuilder.CreateTable("SettingsRecord", table => table .Column<int>("Id", column => column.PrimaryKey().Identity()) .Column<string>("RuleType", column => column.NotNull().WithDefault("")) .Column<string>("Name", column => column.NotNull().WithDefault("")) .Column<string>("Criterion", column => column.NotNull().WithDefault("")) .Column<string>("Theme", column => column.NotNull().WithDefault("")) .Column<int>("Priority", column => column.NotNull().WithDefault(10)) .Column<string>("Zone", column => column.NotNull().WithDefault("")) .Column<string>("Position", column => column.NotNull().WithDefault("")) ); return 1; } When we enable the feature, the migration will run, which will create the table in the database. Once we've done that, all we have to do in order to use the data is inject an IRepository<SettingsRecord>, which is what I'm doing from the set of helpers I put under the SettingsService class: private readonly IRepository<SettingsRecord> _repository; private readonly ISignals _signals; private readonly ICacheManager _cacheManager; public SettingsService( IRepository<SettingsRecord> repository, ISignals signals, ICacheManager cacheManager) { _repository = repository; _signals = signals; _cacheManager = cacheManager; } The repository has a Table property, which implements IQueryable<SettingsRecord> (enabling all kinds of Linq queries) as well as methods such as Delete and Create.
Here's for example how I'm getting all the records in the table: _repository.Table.ToList() And here's how I'm deleting a record: _repository.Delete(_repository.Get(r => r.Id == id)); And here's how I'm creating one: _repository.Create(new SettingsRecord { Name = name, RuleType = ruleType, Criterion = criterion, Theme = theme, Priority = priority, Zone = zone, Position = position }); In summary, you create a record class, a migration, and you're in business and can just manipulate the data through the repository that the framework is exposing. You even get ambient transactions from the work context.

    Read the article

  • Troubleshooting an overheating CPU

    - by Jeff Fry
    My father & I just recently put together a new PC. Specs below. From the very beginning, on boot it will often complain that the CPU is too hot. If I sit in BIOS and watch the CPU, it'll drop back down from red to blue (<72C), at which point I've tended to just boot into Windows...and haven't had any problems. In fact, I've played a couple hours straight of Skyrim at max settings, and not had any visible issues. That said, I've occasionally walked away & come back to find that it's crashed. Yesterday, it crashed (while idle) twice in 12 hours, which shifted the balance from busy-with-life to nervous-I'm-about-to-melt-something. I just installed Core Temp, which is showing my 4 cores fluctuating between 70-98C. I'm guessing at this point that the CPU fan may be incorrectly installed or defective. My first thought is to either (a) add water cooling (which the case supports) and / or (b) replace the CPU fan with an after-market one. That said, I'm very open to suggestions. A note: while I certainly don't want to burn money here, I have a baby coming any day now and am still unpacking from a recent move, so if I have a choice between an option that costs money and another that takes a while...I'll happily spend a bit extra. Side question: Should I be nervous to even have this on at this point? Let me know if there's something useful I could add to my report. Otherwise, I'm looking forward to your suggestions! Thanks. CPU Intel i7-2600 CPU w/ stock fan Other HW ASUS P8Z68-V Pro motherboard 64G SSD boot drive 4 older SATA HDs GIGABYTE ATI Radeon HD6950 1 GB DDR5 8G Kingston T1 Series RAM Corsair 650W Gold Certified power supply Antec P280 case

    Read the article

  • Update in grub.cfg file for loading new kernel image in Ubuntu 11.04

    - by user1627657
    I am new to Linux. I am compiling the Linux kernel (ver: 2.6.34.12) with gcc in the traditional manner, in a VMware machine running Ubuntu 11.04 (kernel version 2.6.38-8-generic). I am unable to find where to register the newly compiled kernel in the grub.cfg file. I updated the newly created image's version name over the existing image entry, but then VMware wasn't able to load the new kernel. I have searched the internet but didn't find an answer, so can anyone help me update grub.cfg so the new kernel loads successfully? A few things about what I have done: make bzImage to create the image file. make modules_install && make install to install the modules, and then sudo mkinitramfs -o initramfs.img-2.6.34 2.6.34. Then sudo gedit grub.cfg. In that, at the menuentry I updated the version of vmlinuz and initrd from 2.6.38-8 to 2.6.34.12. This is what I have done.
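
    On Ubuntu 11.04 grub.cfg is generated, so rather than editing it by hand the usual sequence after building is (a sketch; adjust the version strings to match the files your make install actually put in /boot):

      sudo make modules_install install
      sudo mkinitramfs -o /boot/initrd.img-2.6.34.12 2.6.34.12
      sudo update-grub    # runs grub-mkconfig, which adds a menuentry for every vmlinuz-* it finds in /boot

    If make install didn't copy the kernel image itself, copy arch/x86/boot/bzImage to /boot/vmlinuz-2.6.34.12 first and rerun update-grub.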

    Read the article

  • How to route broadcast packets from machine with two network interfaces on same subnet

    - by Syam
    I run RHEL 5 and have two NICs on one machine connected to the same subnet: eth0 192.168.100.10 eth1 192.168.100.11 My application needs to receive and transmit UDP packets (both unicast & broadcast) via these interfaces. I've found the way to handle the ARP problem and I've added routes to handle the routing problem: ip rule add from 192.168.100.10 lookup 10 ip route add table 10 default src 192.168.100.10 dev eth0 (and similarly, table 11 for eth1) The problem is that only unicast packets get routed properly. Broadcast packets always go out through eth0. I tried removing the rules for 192.168.100.0 & 192.168.100.255 from table 255 and adding them to my tables. But then I see ARP requests being sent out for packets to 192.168.100.255 (obviously, no nodes respond and nobody gets any data). Due to several techno-political issues, I'm stuck with this configuration and can't change subnets or try something different. I've tried SO_BINDTODEVICE and it works, but I'd prefer a solution that doesn't need my application to run as root. Is there a way to get this working? Any help is highly appreciated.
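
    One hedged thing to try is giving each per-interface table its own broadcast route, mirroring the entries the kernel normally keeps in the local table (standard iproute2 syntax, but untested for this exact setup):

      ip route add broadcast 192.168.100.255 dev eth0 table 10 proto kernel scope link src 192.168.100.10
      ip route add broadcast 192.168.100.255 dev eth1 table 11 proto kernel scope link src 192.168.100.11

    That way the rule-based lookup that already steers unicast by source address can resolve the broadcast destination too, instead of falling back to eth0.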

    Read the article

  • Configuring MySQL for Power Failure

    - by Farrukh Arshad
    I have absolutely no experience with databases and MySQL. Now the problem is I have an embedded device running a MySQL database with a web-based application. The problem is that when I shut down my embedded device it just cuts off the power, and I can not have a controlled shutdown. Given this situation, how can I configure MySQL to protect it from failures and, in case of a failure, have maximum support to recover my database? While searching this, I came across the InnoDB engine as well as some configuration options to set, like sync_binlog=1 & innodb_flush_log_at_trx_commit=1. I have noticed my default engine is InnoDB and binary logs are also enabled. What other configuration should I make for the best possible failure & recovery support? Updated: I will be using the InnoDB engine, which supports transactions. My question is how best I can configure it (InnoDB + MySQL) so that it provides the best possible fail-safe as well as crash recovery mechanism. One configuration option I came across is to enable binary logging, which InnoDB uses at the time of recovery. Regards, Farrukh Arshad
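
    For reference, a minimal my.cnf sketch with the standard maximum-durability settings (these trade write performance for safety; everything else left at defaults):

      [mysqld]
      default-storage-engine = InnoDB
      innodb_flush_log_at_trx_commit = 1   # flush+fsync the redo log on every commit
      sync_binlog = 1                      # fsync the binary log on every commit
      innodb_doublewrite = 1               # on by default; protects pages from partial (torn) writes

    With these, a pulled plug can lose at most the in-flight transaction; InnoDB replays its redo log automatically on the next startup.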

    Read the article

  • KVM Hosting: How to efficiently replicate guests

    - by javano
    I have three KVM servers, each with 1 guest VM running directly on its local storage (so they are essentially getting a dedicated box worth of computing power each). In the event of a host failure I would like the guests replicated to at least one of the other hosts, so I can spin it up there until the failing host is fixed. I am curious about KVM cloning. I can clone a VM live or when it's suspended/shutdown. Obviously suspended VMs will naturally be quicker to clone, but these three VMs comprise three parts of a single solution, so I don't want to ever have any one of them shut down. How can I efficiently clone these VMs between servers? I have had a couple of ideas, but are these insane, or is there a better method I have missed for my scenario? Set up a DRBD partition between box 1 and 2 where VM 1 runs from, so it is replicated between box 1 and box 2; repeat between box 2 & 3, and box 3 & 1 (this could be insane; I have never used DRBD, only read about it) Just use standard KVM CLI clone options to perform live clones (I'm dubious about this because I don't know how long it will take and what the performance impact will be during the clone) Run a copy of each VM on at least one other host, and have the guest on one host export its data to the matching guest on another host where it can import that data, scripting this on the guest Some other way? Ideas welcome! Side Note These servers have 4x15k SAS drives in a RAID 10, so they aren't rocketing fast, and as I mentioned, each VM runs from the host's local storage, no NAS or SAN etc. So that is why I am asking this question about guest replication. Also, this isn't about disaster recovery. Guests will be exporting their data to a NAS over a VPN, so I am looking at how I can have them quickly spun up in a host failure situation.
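
    For the "standard KVM CLI clone" option, the plumbing is virt-clone, which wants the guest shut off or paused — a sketch (names and paths are placeholders):

      virsh suspend vm1
      virt-clone --original vm1 --name vm1-replica --file /var/lib/libvirt/images/vm1-replica.img
      virsh resume vm1
      rsync -a --sparse /var/lib/libvirt/images/vm1-replica.img host2:/var/lib/libvirt/images/

    The suspend lasts for the whole disk copy, though, which on local SAS RAID 10 could be minutes per run — one reason the DRBD idea, despite the three-way topology, may end up the less disruptive of your options.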

    Read the article

  • logrotate: neither rotate nor compress empty files

    - by Andrew Tobey
    I have just set up an (r)syslog server to receive the logs of various clients, which works fine. Only logrotate is still not behaving as intended. I want logrotate to create a new logfile for each day, but only to keep and compress non-empty files. My logrotate config currently looks like this: # sample configuration for logrotate being a remote server for multiple clients /var/log/syslog { rotate 3 daily missingok notifempty delaycompress compress dateext nomail postrotate reload rsyslog >/dev/null 2>&1 || true endscript } # local i.e. the system's very own logs: keep logs for a whole month /var/log/kern.log /var/log/kernel-info /var/log/auth.log /var/log/auth-info /var/log/cron.log /var/log/cron-info /var/log/daemon.log /var/log/daemon-info /var/log/mail.log /var/log/rsyslog /var/log/rsyslog-info { rotate 31 daily missingok notifempty delaycompress compress dateext nomail sharedscripts postrotate reload rsyslog >/dev/null 2>&1 || true endscript } # received i.e. logs from the clients /var/log/path-to-logs/*/* { rotate 31 daily missingok notifempty delaycompress compress dateext nomail } What I end up with is some sort of "summarized" files such as filename-datestampDay-Day and corresponding .gz files. What I do have are empty files, which are eventually zipped. So is the notifempty directive in fact responsible for these DayX-DayY files, for days on which really nothing happened? What would be an efficient way to drop both empty log files and their .gz files, so that I eventually only keep logs/compressed files that truly contain data?
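
    For clearing out the empties that already exist (and their compressed twins), a cron-able sketch with find and zcat:

      find /var/log/path-to-logs -type f -empty -delete        # rotated but never written to
      for f in /var/log/path-to-logs/*/*.gz; do
          [ -z "$(zcat "$f" | head -c 1)" ] && rm -v "$f"      # .gz files whose contents decompress to nothing
      done

    Going forward, notifempty should stop empty files from being rotated at all; the date-range filenames suggest some rotations are being skipped and caught up later, which is worth cross-checking against /var/lib/logrotate/status.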

    Read the article

  • Why is cron mailing me program output even though I've redirected to /dev/null?

    - by Server Fault
    I'm trying to restart a system process through cron and getting emailed the startup output of the process. I thought redirecting STDOUT and STDERR to /dev/null would "silence" the output but alas, this has not worked. How can I get cron to silently restart this service? crontab entry: 0 6 * * * service sympa stop &>/dev/null; service sympa start &> /dev/null sample output from restart email: Stopping Sympa bounce manager bounced ...done. * Stopping Sympa task manager task_manager ...done. * Stopping Sympa mailing list archive manager archived ...done. * Stopping Sympa mailing list manager sympa ...done. ... waiting Prototype mismatch: sub Lock::LOCK_SH () vs none at /home/sympa/bin/Lock.pm line 38. Constant subroutine LOCK_SH redefined at /home/sympa/bin/Lock.pm line 38. Prototype mismatch: sub Lock::LOCK_EX () vs none at /home/sympa/bin/Lock.pm line 39. Constant subroutine LOCK_EX redefined at /home/sympa/bin/Lock.pm line 39. Prototype mismatch: sub Lock::LOCK_NB () vs none at /home/sympa/bin/Lock.pm line 40. Constant subroutine LOCK_NB redefined at /home/sympa/bin/Lock.pm line 40.
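
    The catch is that cron runs commands with /bin/sh, and in plain sh the bash-only &> operator parses as "&" (background the command) followed by "> file", so stderr is never redirected — hence the mail. The portable form works:

      0 6 * * * service sympa stop >/dev/null 2>&1; service sympa start >/dev/null 2>&1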

    Read the article

  • How do I uninstall a ruby version installed via source?

    - by Aaron McIver
    I installed a version (1.9.3-p194) of ruby from source using make install and realized this may have been the wrong route to take; I should be using a solution such as rvm to manage my ruby versions within the OS. I looked to see if an uninstall target existed to run in conjunction with make, and it didn't. I then proceeded to install rvm and add the aforementioned version in to my list of managed rubies within rvm, which is now listed as ext-ruby-1.9.3-p194. rvm rubies ext-ruby-1.9.3-p194 [ x86_64 ] =* ruby-1.9.3-p194 [ x86_64 ] # => - current # =* - current && default # * - default** When I perform an rvm remove, it simply removes it from the rubies list; however, it still exists within /usr/local/bin. I am not concerned with the system ruby version residing in /usr/bin, as I understand that is tied to the OS and should simply be ignored. How can I safely uninstall/remove the aforementioned version and all the places in which it was installed, short of looking at the install script?
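
    If the build tree you ran make install from still exists, ruby's install step records what it copied — a hedged sketch (the .installed.list file is produced by MRI builds of this era, but verify it exists, and review it before deleting anything):

      cd ruby-1.9.3-p194          # the original source/build directory
      less .installed.list        # every path 'make install' created
      xargs -a .installed.list -I{} sudo rm -v {}

    Failing that, the files live under the --prefix you configured (default /usr/local): the ruby, irb, gem, rake, rdoc, ri and erb binaries in /usr/local/bin plus the /usr/local/lib/ruby tree.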

    Read the article

  • Can varnish cache files without specific extension or residing in specific directory

    - by pataroulis
    I have a varnish installation to cache (MANY) images that my service serves. It is about 200 images of around 4k per second, and varnish happily serves them according to the following rule: if (req.request == "GET" && req.url ~ "\.(css|gif|jpg|jpeg|bmp|png|ico|img|tga|wmf)$") { remove req.http.cookie; return(lookup); } Now, the thing is that I recently added another service on the same server that creates thumbnails to serve, but it does not add a specific extension. The files follow this filename pattern: http://www.example.com/thumbnails/date-of-thumbnail/xxxxxxxxx.xx where xx are numbers, so xxxxxxxxx.xx could be 6482364283.73 (two numbers at the end) (actually this is the timestamp, so I can keep extra info in the filename). That has the side effect that varnish does not cache them, and I see them constantly being served by apache itself. Even though I can change the format from now on to create thumbs ending in .jpg, is there a way to change the vcl file of my varnish daemon to either cache everything under a directory (the thumbnails directory) or everything with two numbers as its extension? Let me know if I can provide any additional info! Thanks!
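
    Either condition is easy to add alongside the existing rule — a VCL sketch reusing the same style (the regexes assume the URL shapes described above):

      if (req.request == "GET" &&
          (req.url ~ "^/thumbnails/" || req.url ~ "\.[0-9]+$")) {
          remove req.http.cookie;
          return(lookup);
      }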

    Read the article

  • Debian Wheezy IPv6 isn't configured with ifup post-up hook

    - by aef
    We recently set up a server on Debian Wheezy Beta 3 (x86_64) which has a native IPv6 connection. We configured the eth0 interface to get its IPv6 configuration through some post-up hook commands in /etc/network/interfaces. The result is that after booting the system up, only IPv4 and an auto-configured link-local IPv6 address are configured on the interface, as if the commands had never been executed. When we additionally place the commands after the call to ifup -a inside the /etc/init.d/networking init script, everything works as expected and we have a fully configured interface after booting up. This is quite an ugly way to configure the interface. What are we doing wrong with the ifup post-up hooks? Or is this a bug? The section from /etc/network/interfaces looks like this (IP addresses changed): allow-hotplug eth0 iface eth0 inet static address 1.2.3.1 netmask 255.255.255.192 network 1.2.3.0 broadcast 1.2.3.63 gateway 1.2.3.62 dns-nameservers 8.8.8.8 dns-search mydomain.tld post-up ip -6 addr add 2001:db8:100:3022::2 dev eth0 post-up ip -6 route add fe80::1 dev eth0 post-up ip -6 route add default via fe80::1 dev eth0 I also tried it in this alternative way: auto eth0 iface eth0 inet static address 1.2.3.1 netmask 255.255.255.192 network 1.2.3.0 broadcast 1.2.3.63 gateway 1.2.3.62 dns-nameservers 8.8.8.8 dns-search mydomain.tld iface eth0 inet6 static address 2001:db8:100:3022::2 netmask 64 gateway fe80::1 What we added to /etc/init.d/networking: … case "$1" in start) process_options check_ifstate if [ "$CONFIGURE_INTERFACES" = no ] then log_action_msg "Not configuring network interfaces, see /etc/default/networking" exit 0 fi set -f exclusions=$(process_exclusions) log_action_begin_msg "Configuring network interfaces" if ifup -a $exclusions $verbose && ifup_hotplug $exclusions $verbose # Our additions ip -6 addr add 2001:db8:100:3022::2 dev eth0 ip -6 route add fe80::1 dev eth0 ip -6 route add default via fe80::1 dev eth0 then log_action_end_msg $? else log_action_end_msg $? fi ;; …
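
    A quick way to see whether the hooks fire at all (a small sketch; run it from the console, not over the connection you are reconfiguring):

      ifdown eth0 && ifup -v eth0   # -v prints every pre-up/up/post-up command as it is executed

    If the ip commands are shown and succeed there but the addresses are still missing after boot, suspicion shifts to ordering (allow-hotplug vs. auto) rather than to the hooks themselves.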

    Read the article

  • Apache LDAP with local groups

    - by Greg Ogle
    I have a server that currently uses htpasswd to authenticate users. I'm migrating to using LDAP, but my LDAP server is only for user authentication, not allowing me to add groups. I still need to use groups as they are used for access control via the Apache Directory tags in my configuration. The alternative is to revisit the access control altogether, using php or something of the sort to limit access. this works for 'basic' authentication <Directory /misc/www/html/site> #LDAP & other config stuff irrelevant to issue Require ldap-group cn=<service>,ou=Groups,dc=<service>,dc=<org>,dc=com </Directory> attempted <Directory /misc/www/html/site> #LDAP & other config stuff irrelevant to issue #groups file from previous configuration using htpasswd #tried to tweak to match new user format, but I don't think it looks up in here AuthGroupFile /misc/www/htpasswd/groups #added the group, which is how it works when using htpasswd Require ldap-group cn=<service>,ou=Groups,dc=<service>,dc=<org>,dc=com group xyz </Directory>

    Read the article
