Search Results

Search found 16665 results on 667 pages for 'nhibernate configuration'.


  • FluentNHibernate Overrides: UseOverridesFromAssemblyOf non-generic version

    - by ThiagoAlves
    Hi, I have a repository class that inherits from a generic implementation:

        namespace RepositoryImplementation
        {
            public class PersonRepository : Web.Generics.GenericNHibernateRepository<Person> { }
        }

    The generic repository implementation uses Fluent NHibernate conventions, and they're working fine. One of those conventions is that all properties are not nullable. Now I need to specify, outside the conventions, that certain properties may be nullable. Fluent NHibernate has an interesting override mechanism for that:

        namespace RepositoryImplementation
        {
            public class PersonMappingOverride : IAutoMappingOverride<Person>
            {
                public void Override(FluentNHibernate.Automapping.AutoMapping<Person> mapping)
                {
                    mapping.Map(x => x.PhoneNumber).Nullable();
                }
            }
        }

    Now I need to register the override class with Fluent NHibernate. I have the following code in the Web.Generics.GenericNHibernateRepository generic class:

        AutoMap.AssemblyOf<Person>()
            .Where(type => type.Namespace == "Entities")
            .UseOverridesFromAssemblyOf<PersonMappingOverride>();

    The problem is that UseOverridesFromAssemblyOf is a generic method, so calling .UseOverridesFromAssemblyOf<PersonMappingOverride>() from the generic class would create a circular reference. I don't want the generic repository to know about either the concrete repository or the mapping override class, because they vary from project to project. I do see another option: inside the GenericNHibernateRepository class, this.GetType() returns the concrete repository type (e.g. PersonRepository), but I can't call UseOverridesFromAssemblyOf() passing a type. Is there another way to configure overrides in Fluent NHibernate? If not, how could I call UseOverridesFromAssemblyOf<T> without making the generic repository depend on the repository implementation or the mapping override class? (Source: http://wiki.fluentnhibernate.org/Auto_mapping#Overrides)
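
    One possible way out, sketched below and not taken from the original question or the linked wiki page: inside the generic base class, this.GetType() yields the concrete repository, so its assembly can be scanned for overrides at runtime. The non-generic UseOverridesFromAssembly(Assembly) overload is an assumption about the FluentNHibernate version in use; the reflection fallback only relies on the generic method shown in the question.

        // Sketch, inside Web.Generics.GenericNHibernateRepository<T>; assumes the concrete
        // repository and its IAutoMappingOverride<> implementations live in the same assembly.
        // (Requires System.Linq and System.Reflection.)
        var overridesAssembly = this.GetType().Assembly;

        var model = AutoMap.AssemblyOf<T>()
            .Where(type => type.Namespace == "Entities");

        // Option 1 (assumption): a non-generic overload that takes an Assembly.
        model.UseOverridesFromAssembly(overridesAssembly);

        // Option 2: close the generic method over a type discovered at runtime.
        var overrideType = overridesAssembly.GetTypes()
            .First(t => t.GetInterfaces().Any(i => i.IsGenericType &&
                        i.GetGenericTypeDefinition() == typeof(IAutoMappingOverride<>)));
        model.GetType()
             .GetMethod("UseOverridesFromAssemblyOf")
             .MakeGenericMethod(overrideType)
             .Invoke(model, null);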

    Read the article

  • SQLite assembly not copied to output folder for unit testing

    - by Groo
    Problem: the SQLite assembly referenced in my DAL assembly does not get copied to the output folder when running unit tests (Copy Local is set to true). I am working on a .NET 3.5 app in VS2008, with NHibernate and SQLite in my DAL. Data access is exposed through the IRepository interface (repository factory) to the other layers, so there is no need to reference the NHibernate or System.Data.SQLite assemblies in those layers. For unit testing, there is a public factory method (also in my DAL) which creates an in-memory SQLite session and a new IRepository implementation. This is done to avoid having a shared SQLite in-memory configuration for all assemblies which need it, and to avoid referencing those DAL-internal assemblies. The problem is when I run unit tests, which reside in a separate project: if I don't add System.Data.SQLite as a reference to the unit test project, it doesn't get copied to the TestResults...\Out folder (although the test project references my DAL project, which references System.Data.SQLite with Copy Local set to true), so the tests fail while NHibernate is being configured. If I add the reference to my testing project, then it does get copied and the unit tests work. What am I doing wrong?
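
    A common workaround, sketched below under the assumption that MSTest is the runner (the TestResults...\Out folder suggests that): deploy the assembly with the test run explicitly instead of referencing it from the test project. The class name and path are illustrative.

        using Microsoft.VisualStudio.TestTools.UnitTesting;

        [TestClass]
        // Path is relative to the build output by default; adjust to wherever System.Data.SQLite.dll lands.
        [DeploymentItem("System.Data.SQLite.dll")]
        public class RepositoryDeploymentTests
        {
            [TestMethod]
            public void CanConfigureNHibernateAgainstInMemorySQLite()
            {
                // ... obtain the in-memory IRepository from the DAL factory and exercise it ...
            }
        }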

    Read the article

  • AssertionFailure: "null identifier" - FluentNH + SQLServerCE

    - by Stefan
    The code fails at session.Save(employee); with AssertionFailure "null identifier". What am I doing wrong?

        using FluentNHibernate.Cfg;
        using FluentNHibernate.Cfg.Db;
        using FluentNHibernate.Mapping;
        using NHibernate;
        using NHibernate.Cfg;
        using NHibernate.Tool.hbm2ddl;

        namespace FNHTest
        {
            public class Employee
            {
                public virtual int Id { get; private set; }
                public virtual string Name { get; set; }
                public virtual string Surname { get; set; }
            }

            public class EmployeeMap : ClassMap<Employee>
            {
                public EmployeeMap()
                {
                    Id(e => e.Id);
                    Map(e => e.Name);
                    Map(e => e.Surname);
                }
            }

            public class DB
            {
                private static ISessionFactory mySessionFactory = null;

                private static ISessionFactory SessionFactory
                {
                    get
                    {
                        if (mySessionFactory == null)
                        {
                            mySessionFactory = Fluently.Configure()
                                .Database(MsSqlCeConfiguration.Standard
                                    .ConnectionString("Data Source=MyDB.sdf"))
                                .Mappings(m => m.FluentMappings.AddFromAssemblyOf<EmployeeMap>())
                                .ExposeConfiguration(BuildSchema)
                                .BuildSessionFactory();
                        }
                        return mySessionFactory;
                    }
                }

                private static void BuildSchema(Configuration configuration)
                {
                    SchemaExport schemaExport = new SchemaExport(configuration);
                    schemaExport.Execute(false, true, false);
                }

                public static ISession OpenSession()
                {
                    return SessionFactory.OpenSession();
                }
            }

            public class Program
            {
                public static void Main(string[] args)
                {
                    var employee = new Employee { Name = "John", Surname = "Smith" };
                    using (ISession session = DB.OpenSession())
                    {
                        session.Save(employee);
                    }
                }
            }
        }
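
    One side note, not taken from the original question and not necessarily the cause of the assertion: NHibernate only guarantees that changes reach the database when the session is flushed or a transaction is committed, so a more conventional version of Main wraps the save in an explicit transaction:

        using (ISession session = DB.OpenSession())
        using (ITransaction tx = session.BeginTransaction())
        {
            session.Save(employee);
            tx.Commit();   // flushes the session and makes the INSERT permanent
        }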

    Read the article

  • Are there good reasons not to use an ORM?

    - by hangy
    During my apprenticeship, I have used NHibernate for some smaller projects which I mostly coded and designed on my own. Now, before starting a bigger project, the discussion arose of how to design data access and whether or not to use an ORM layer. As I am still in my apprenticeship and still consider myself a beginner in enterprise programming, I did not really try to push my opinion, which is that using an object-relational mapper for database access can ease development quite a lot. The other coders in the development team are much more experienced than me, so I think I will just do what they say. :-) However, I do not completely understand two of the main reasons for not using NHibernate or a similar project: (1) one can just build one's own data access objects with SQL queries and copy those queries out of Microsoft SQL Server Management Studio, and (2) debugging an ORM can be hard. So, of course I could just build my data access layer with a lot of SELECTs etc., but then I would miss the advantages of automatic joins, lazy-loading proxy classes, and a lower maintenance effort if a table gets a new column or a column gets renamed (updating numerous SELECT, INSERT and UPDATE queries vs. updating the mapping config and possibly refactoring the business classes and DTOs). Also, using NHibernate you can run into unforeseen problems if you do not know the framework very well; for example, trusting that setting a string's length in Table.hbm.xml means it is automatically validated. However, I can also imagine similar bugs in a "simple" SqlConnection-based data access layer. Finally, are those arguments really a good reason not to utilise an ORM for a non-trivial, database-backed enterprise application? Are there other arguments they/I might have missed? (I should probably add that this is likely the first "big" .NET/C#-based application here which will require teamwork. Good practices which are seen as pretty normal on Stack Overflow, such as unit testing or continuous integration, have been non-existent here so far.)

    Read the article

  • What would be the optimal disk config for SQL Server 2008 R2?

    - by Kev
    We have a new Dell R710 server that came with the following storage configuration:

        8 x 146GB SAS 10k 6Gbps disks
        1 x Perc H700 Integrated Controller (2 x 4 disks - 2 ports each supporting 4 disks)

    1. What would be the optimal configuration if we were just after performance?
    2. What would be the optimal configuration if we were after performance but wanted data resilience?
    3. As per 2 above, but with a hot standby disk?

    We plan to run Windows 2008 R2 and SQL Server 2008 R2. Maximising storage capacity isn't a prime concern.

    Read the article

  • dead man's switch for remote networking interventions

    - by ascobol
    Hi, as I'm going to change the network configuration of a remote server, I was thinking about some safety mechanisms to protect me from accidentally losing control of the server. The level-0 protection I'm using is a scheduled system reboot:

        # at now + x minutes
        > reboot
        > ctrl+D

    where x is the delay before the reboot. While this works relatively well for very simple tasks like playing with iptables, the method has at least two drawbacks:

    1. It's not very reactive: a connectivity problem should be detected automatically, for example when an automated remote ssh command stops working for x seconds.
    2. It obviously cannot work if one needs to modify some configuration files and then reboot to test the changes.

    Are you guys using some tool for the second point? I would love to have something able to revert the system configuration to a previously known stable state if I can't reach the server X minutes after reboot. Thanks!
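
    For the second point, a low-tech sketch in the same spirit as the level-0 trick, using only standard tools (cp, at, atq, atrm); the paths are illustrative, and it assumes atd starts at boot so the pending job survives the test reboot:

        # keep a known-good copy of the file about to be changed
        cp -a /etc/network/interfaces /root/interfaces.known-good

        # arm the dead man's switch: restore the old config and reboot in 15 minutes
        echo 'cp -a /root/interfaces.known-good /etc/network/interfaces && reboot' | at now + 15 minutes

        # ...apply the new configuration, reboot if needed, test connectivity...

        # if the server is still reachable, disarm the switch
        atq        # find the pending job number
        atrm 3     # remove it (job number taken from the atq output)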

    Read the article

  • IIS7 FastCGI downloads quit at 4128760 bytes on slow connections

    - by eingko
    I'm using FastCGI via IIS7 to host a PHP application. For whatever reason, downloads that are streamed via PHP (i.e. a script that outputs a file/bytes as a response) complete perfectly fine on high-speed connections, but on anything slower (even DSL) quit at EXACTLY 4128760 bytes (~3.9MB) - which makes me think it's a configuration issue... We only started having this problem when we switched from Apache to IIS - this also points to a configuration problem, I think. But if it's a configuration issue, why would it only affect slower connections? Does anyone know where (or how) I could possibly change a setting like this? I've tried changing the idleTimeout, executionTimeout, and activityTimeout values in my web.config, but this hasn't helped at all. Any help or direction would be greatly appreciated. Thanks in advance.

    Read the article

  • Is there a Linux kernel boot parameter to configure an IPv6 address?

    - by aef
    I know there is a parameter named ip which lets you configure IPv4 addresses in the Linux kernel through the boot loader. It looks like the following:

        ip=192.0.2.1::192.0.2.62:255.255.255.192::eth0:none

    I'm looking for an equivalent parameter for IPv6 configuration, but I couldn't find anything about it in the kernel documentation. Update: Because a lot of you asked why I would need this: the idea of using a kernel parameter came up in relation to this problem. I suspect the regular boot-up interface configuration is not being done because the interfaces are already up. The reason for that could be that I'm using a pre-boot environment with a Dropbear SSH server to let me unlock my encrypted root partition. The IP addresses for this environment are configured through GRUB with the ip= parameter. There is no DHCP or Router Advertisement available on that Ethernet segment, and as this is the uplink segment provided by a large hosting company, there is no way to change that fact.

    Read the article

  • Apache web server: "proxying" a webapp from another server?

    - by Riddler
    Sorry for the lame terminology - I'm in no way a sysadmin... So here's the deal. I have two Linux boxes on the same network; let's refer to those boxes by their IPs, a.b.c.d and e.f.g.h. Each box runs some webapp, normally available at http://a.b.c.d/ and http://e.f.g.h/. What I want to accomplish is this: with some Apache web server configuration voodoo (Apache, by the way, lives on both boxes), the first app would be available via http://a.b.c.d/whatever1/, and the 2nd app would be available as http://a.b.c.d/whatever2/ - but would still reside on the other server (e.f.g.h). Long story short - is it at all possible to do this with Apache configuration magic and without touching the webapps and their configuration? If so - how? :) Thanks in advance!
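
    This is what mod_proxy's reverse-proxy directives are for; a minimal sketch for the Apache instance on a.b.c.d (assumes mod_proxy and mod_proxy_http are loaded, and the path prefixes are just examples):

        ProxyRequests Off

        # the local app, surfaced under /whatever1/
        ProxyPass        /whatever1/ http://a.b.c.d/
        ProxyPassReverse /whatever1/ http://a.b.c.d/

        # the app on the other box, surfaced under /whatever2/
        ProxyPass        /whatever2/ http://e.f.g.h/
        ProxyPassReverse /whatever2/ http://e.f.g.h/

    One caveat: if either webapp generates absolute links or redirects to its own root, it may still need to be made aware of the new path prefix, so "without touching the webapps" only holds for apps that stick to relative URLs.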

    Read the article

  • WPF Visual Studio Package gives error: Could not find endpoint element with name 'WCFname' and contract 'WCFcontract'

    - by Andrei
    Hi everybody. This error has been covered before in other questions, but not for a Visual Studio package:

        Could not find endpoint element with name 'WCFname' and contract 'WCFcontract' in the ServiceModel client configuration section.

    I have a VS package project that needs to connect to a WCF service that provides some functionality. I add a reference to the WCF service and Visual Studio automatically creates the content of the configuration file. config file:

        <?xml version="1.0" encoding="utf-8" ?>
        <configuration>
          <system.serviceModel>
            <bindings>
              <wsHttpBinding>
                <binding name="WSHttpBinding_IWCFSearchService" closeTimeout="00:01:00" openTimeout="00:01:00"
                         receiveTimeout="00:10:00" sendTimeout="00:01:00" bypassProxyOnLocal="false"
                         transactionFlow="false" hostNameComparisonMode="StrongWildcard" maxBufferPoolSize="524288"
                         maxReceivedMessageSize="65536" messageEncoding="Text" textEncoding="utf-8"
                         useDefaultWebProxy="true" allowCookies="false">
                  <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="16384"
                                maxBytesPerRead="4096" maxNameTableCharCount="16384" />
                  <reliableSession ordered="true" inactivityTimeout="00:10:00" enabled="false" />
                  <security mode="Message">
                    <transport clientCredentialType="Windows" proxyCredentialType="None" realm="" />
                    <message clientCredentialType="Windows" negotiateServiceCredential="true" algorithmSuite="Default" />
                  </security>
                </binding>
              </wsHttpBinding>
            </bindings>
            <client>
              <endpoint address="http://localhost:8732/Design_Time_Addresses/WCFSearchServiceLibrary/Service1/"
                        binding="wsHttpBinding" bindingConfiguration="WSHttpBinding_IWCFSearchService"
                        contract="WCFSearchServiceReference.IWCFSearchService" name="WSHttpBinding_IWCFSearchService">
                <identity>
                  <dns value="localhost" />
                </identity>
              </endpoint>
            </client>
          </system.serviceModel>
        </configuration>

    However, when I run the application (in the VS experimental instance) it doesn't seem to pick up the provided configuration file (app.config). Every time it just throws this error:

        Could not find endpoint element with name 'WSHttpBinding_IWCFSearchService' and contract 'WCFSearchServiceReference.IWCFSearchService' in the ServiceModel client configuration section. This might be because no configuration file was found for your application, or because no endpoint element matching this name could be found in the client element.

    My guess is that it's taking the configuration file for Visual Studio (since it is running in the VS experimental instance). So why isn't it recognizing the app.config file, and how could I make the application recognize it? Any help would be very welcome, as I have already been trying to fix this for some time. Thanks.
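
    A sketch of one common way around this, not taken from the original thread: since the package runs inside devenv.exe and therefore reads devenv.exe.config rather than the package's own app.config (which is what the question suspects), the endpoint can be defined in code instead of configuration, reusing the address, binding and contract from the generated config above. Types come from System.ServiceModel.

        var binding = new WSHttpBinding(SecurityMode.Message);
        var address = new EndpointAddress(
            new Uri("http://localhost:8732/Design_Time_Addresses/WCFSearchServiceLibrary/Service1/"),
            EndpointIdentity.CreateDnsIdentity("localhost"));

        // no <client>/<endpoint> lookup by name, so the missing-config error cannot occur
        var factory = new ChannelFactory<WCFSearchServiceReference.IWCFSearchService>(binding, address);
        var searchService = factory.CreateChannel();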

    Read the article

  • NHibernate, VS 2010

    - by ??????
    (The original post is in Russian; translated below.) Hello, ANRY! Just recently, during a practical placement at university, I came across NHibernate. I read your article "Hello NHibernate!" and set out to implement something like a store: that is, products, customers, orders. Accordingly I created 4 tables in MSSQL 2010: Goods (id_tovara, name, price), Client (id_klienta, name, surname), Order (id_zakaza, id_klienta, cost) and Order Line (id_stroki_zakaza, id_zakaza, id_tovara, quantity), and, correspondingly, 4 classes: Product, Customer, Order, Order Line. My question is this: do I need to create 4 mapping files, or can I make do with only one? Also, on Debug I get the following error: "Could not compile the mapping document: Sklad.products.hbm.xml", while "Build" completes normally with no errors. What could the problem be, and how can I solve it? Regards, Andrew. P.S. If it's not too much trouble, you can reply by e-mail: [email protected]

    Read the article

  • Swapping from NHibernate to Entity Framework – Sanity Check

    - by DesigningCode
    Now I'm not an expert in either of these techs. I have a nice unit-of-work / repository framework built with NHibernate. Works pretty well. I use FluentNHibernate to do the mappings. Works well. Takes very little code to get going with a DB-backed OO model. So why swap? Linq. In Entity Framework you get much better LINQ support. Visibility. I have no idea what's really happening with NHibernate… it's a cloud of mystery most of the time. You have to read all the blogs, mailing lists, etc. to know what's going on. So, EF 4.0 looks pretty good… it has reasonably good support for mapping POCOs, and wrapping UnitOfWork and Repository around it seems OK. The only thing I haven't liked too much is having to explicitly load lazy-loaded entities. So… am I sane? Is EF the way to go? Or is NHibernate going to suddenly release the next generation of coolness? Are there any other major gotchas in using EF over NHibernate?

    Read the article

  • TeamCity swap configuration files

    - by Edijs
    Hi! I have been using CC.NET for a while and decided to try TeamCity. The initial and default configuration is very easy, but how do I swap configuration files after code is checked out and before unit tests are run? I am using TFS and NUnit.

    1. When working locally I have a configuration file pointing to my local server.
    2. On the build server, TeamCity gets notified that I have checked in code and builds a new version.
    3. The server runs the unit tests.

    At step 3, before the server runs the unit tests, I need to swap in configuration files that point to other servers, not the ones I am using locally. How do you accomplish this task in TeamCity? Thanks, Edijs

    Read the article

  • SQL Server Management Studio Express 2005 has no Configuration Manager

    - by brohjoe
    Where is the configuration manager for SQL Express 2005? I need to configure SQL Server for TCP/IP but there is no configuration manager with the package. I see SQL Server Database Publishing Wizard, I see SQL Server Migration Assistant for Access, but no Configuration Manager. According to the MSDN, there should be one. I've even looked online for a download of the Configuration Manager for SQL Server 2005, but could not find one. Did I miss something in the download or should I just scrap SQL Server Express and download the full-blown SQL Server for Developers?

    Read the article

  • Eclipse RCP: Making use of configuration directory

    - by Dot
    Hello, My Eclipse RCP application requires a configuration file that contains some information to connect to a remote database. Where is the best location to store this configuration file? Can I use the default configuration directory (where 'config.ini' is usually stored) for this purpose? If so, how can I get a File instance to this location programmatically? I also note that this directory does not exist in my Eclipse IDE. Thanks.
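
    A minimal sketch of reading that location at runtime (Eclipse 3.x Platform API; the properties file name is illustrative). Note that when launching from the IDE, the configuration area usually points into the workspace's .metadata rather than a visible configuration folder, which is one reason the directory can seem to be missing there:

        import java.io.File;
        import java.net.URL;
        import org.eclipse.core.runtime.Platform;
        import org.eclipse.osgi.service.datalocation.Location;

        public final class ConfigArea {
            /** File handle to the configuration area of the running instance. */
            public static File getConfigurationDirectory() {
                Location location = Platform.getConfigurationLocation(); // may be null in unusual setups
                URL url = location.getURL();                             // e.g. file:/.../configuration/
                return new File(url.getPath());
            }
        }

        // usage: File dbSettings = new File(ConfigArea.getConfigurationDirectory(), "database.properties");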

    Read the article

  • Unity 2 and Enterprise library configuration tool

    - by nachid
    Hi, can someone show me how to use the Enterprise Library configuration tool with Unity 2? Whatever I did, when I open the Enterprise Library configuration tool I could not get it working with a Unity config file. When I click on the menu to add a new block, there is no Unity configuration block. What am I doing wrong? Thank you for your help.

    Read the article

  • Example for creating a configuration

    - by Steven
    Hi, I am facing some difficulties in creating a session. Can anyone provide me an example or link where a Configuration is created from an external file by giving its URL, like:

        Configuration config = new Configuration().configure(url);

    The config file is in another Hibernate project. Is there anything that I should add to the classpath, or somewhere specific I should store the config file? My app just hangs at that statement. Help!
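
    A sketch of the whole call chain, in case it helps (Hibernate 3.x; the file URL is illustrative). configure() also accepts a java.net.URL, so a cfg.xml that lives outside the classpath can be loaded directly; note that buildSessionFactory() is the step that builds mappings and opens database connections, which is one place such code can appear to hang:

        import java.net.URL;
        import org.hibernate.SessionFactory;
        import org.hibernate.cfg.Configuration;

        URL cfgUrl = new URL("file:///path/to/other-project/hibernate.cfg.xml");
        Configuration config = new Configuration().configure(cfgUrl);   // parse the external cfg.xml
        SessionFactory sessionFactory = config.buildSessionFactory();   // builds mappings, opens connections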

    Read the article

  • Why Duplicate the “Release” configuration to “Distribution”?

    - by Horace Ho
    In the Apple guide, there is a step before building the App Store version: open the Xcode project, duplicate the "Release" configuration in the Configurations pane of the project's Info panel, and rename this new configuration "Distribution". Why is this step needed? Can I skip it and use the "Release" configuration to build the final version for the App Store?

    Read the article

  • HA Proxy Stick-table and tcp-connection configuration

    - by Vladimir
    I am using HA-Proxy version 1.4.18 2011/09/16. I am trying to insert the following into the /etc/haproxy/haproxy.cfg file:

        # Use General Purpose Counter (gpc) 0 in SC1 as a global abuse counter
        # Monitors the number of requests sent by an IP over a period of 10 seconds
        stick-table type ip size 1m expire 10s store gpc0,http_req_rate(10s)
        tcp-request connection track-sc1 src
        tcp-request connection reject if { src_get_gpc0 gt 0 }

        # Table definition
        stick-table type ip size 100k expire 30s store conn_cur(3s)

        # Allow clean known IPs to bypass the filter
        tcp-request connection accept if { src -f /etc/haproxy/whitelist.lst }

        # Shut the new connection as long as the client has already 10 opened
        tcp-request connection reject if { src_conn_cur ge 10 }
        tcp-request connection track-sc1 src

    I get the following errors:

        [ALERT] 256/113143 (4627) : parsing [/etc/haproxy/haproxy.cfg:36] : stick-table: unknown argument 'store'.
        [ALERT] 256/113143 (4627) : parsing [/etc/haproxy/haproxy.cfg:37] : unknown argument 'connection' after 'tcp-request' in proxy 'http_proxy'
        [ALERT] 256/113143 (4627) : parsing [/etc/haproxy/haproxy.cfg:38] : unknown argument 'connection' after 'tcp-request' in proxy 'http_proxy'
        [ALERT] 256/113143 (4627) : parsing [/etc/haproxy/haproxy.cfg:41] : stick-table: unknown argument 'store'.
        [ALERT] 256/113143 (4627) : parsing [/etc/haproxy/haproxy.cfg:43] : unknown argument 'connection' after 'tcp-request' in proxy 'http_proxy'
        [ALERT] 256/113143 (4627) : parsing [/etc/haproxy/haproxy.cfg:45] : unknown argument 'connection' after 'tcp-request' in proxy 'http_proxy'
        [ALERT] 256/113143 (4627) : parsing [/etc/haproxy/haproxy.cfg:46] : unknown argument 'connection' after 'tcp-request' in proxy 'http_proxy'
        [ALERT] 256/113143 (4627) : Error(s) found in configuration file : /etc/haproxy/haproxy.cfg
        [WARNING] 256/113143 (4627) : Proxy 'http_proxy': in multi-process mode, stats will be limited to process assigned to the current request.
        [ALERT] 256/113143 (4627) : Fatal errors found in configuration.
        [fail]

    Could you please tell me what is wrong with the configuration? Thanks!

    Read the article

  • Remove Kernel Lock from Unmounted Mass Storage USB Device from the Command Line in Linux

    - by Casey
    I've searched high and low and can't figure this one out. I have an older Olympus camera (2001 or so). When I plug in the USB connection, I get the following log output:

        $ dmesg | grep sd
        [20047.625076] sd 21:0:0:0: Attached scsi generic sg7 type 0
        [20047.627922] sd 21:0:0:0: [sdg] Attached SCSI removable disk

    Secondly, the drive is not mounted in the FS, but when I run gphoto2 I get the following error:

        $ gphoto2 --list-config
        *** Error ***
        An error occurred in the io-library ('Could not lock the device'): Camera is already in use.
        *** Error (-60: 'Could not lock the device') ***

    What command will unmount the drive? For example, in Nautilus I can right-click and select "Safely Remove Device". After doing that, the /dev/sg7 and /dev/sdg devices are removed, and the output of gphoto2 is then:

        # gphoto2 --list-config
        /Camera Configuration/Picture Settings/resolution
        /Camera Configuration/Picture Settings/shutter
        /Camera Configuration/Picture Settings/aperture
        /Camera Configuration/Picture Settings/color
        /Camera Configuration/Picture Settings/flash
        /Camera Configuration/Picture Settings/whitebalance
        /Camera Configuration/Picture Settings/focus-mode
        /Camera Configuration/Picture Settings/focus-pos
        /Camera Configuration/Picture Settings/exp
        /Camera Configuration/Picture Settings/exp-meter
        /Camera Configuration/Picture Settings/zoom
        /Camera Configuration/Picture Settings/dzoom
        /Camera Configuration/Picture Settings/iso
        /Camera Configuration/Camera Settings/date-time
        /Camera Configuration/Camera Settings/lcd-mode
        /Camera Configuration/Camera Settings/lcd-brightness
        /Camera Configuration/Camera Settings/lcd-auto-shutoff
        /Camera Configuration/Camera Settings/camera-power-save
        /Camera Configuration/Camera Settings/host-power-save
        /Camera Configuration/Camera Settings/timefmt

    Some things I've tried already are sdparm and sg3_utils; however, I am unfamiliar with them, so it's possible I just didn't find the right command. Update 1:

        # mount | grep sdg
        # mount | grep sg7
        # umount /dev/sg7
        umount: /dev/sg7: not mounted
        # umount /dev/sdg
        umount: /dev/sdg: not mounted
        # gphoto2 --list-config
        *** Error ***
        An error occurred in the io-library ('Could not lock the device'): Camera is already in use.
        *** Error (-60: 'Could not lock the device') ***

    Read the article

  • Apache Simple Configuration Issue: per-user directory is accessing /~user instead of ~user

    - by Huckphin
    Hello. I am just getting Apache 2.2 running on Fedora 13 Beta 64-bit, and I am running into issues setting up my per-user directory. The goal is to make localhost/~user map to /home/user/public_html. I think I have the permissions right, because I have 755 on /home/user, 755 on /home/user/public_html, and 777 on everything inside /home/user/public_html, set recursively. My mod_userdir configuration looks like this:

        <IfModule mod_userdir.c>
            #
            # UserDir is disabled by default since it can confirm the presence
            # of a username on the system (depending on home directory
            # permissions).
            #
            UserDir disabled root
            UserDir enabled huckphin

            #
            # To enable requests to /~user/ to serve the user's public_html
            # directory, remove the "UserDir disabled" line above, and uncomment
            # the following line instead:
            #
            UserDir public_html
        </IfModule>

    The error that I am seeing in the error log is this:

        [Sat May 15 09:54:29 2010] [error] [client 127.0.0.1] (13)Permission denied: access to /~huckphin/index.html denied

    When I log in as the apache user, I know that /~huckphin does not exist, and this is not what I want. I want it to be accessing ~huckphin, not /~huckphin. What do I need to change in my configuration for this to work?
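
    One thing worth checking on Fedora, offered as an assumption rather than a confirmed diagnosis: with the file permissions described above, a (13)Permission denied on per-user directories is often SELinux rather than the Apache configuration itself. The usual adjustments look like this (run as root; the path matches the user above):

        # let httpd serve content from users' home directories
        setsebool -P httpd_enable_homedirs 1
        # label the published tree so the httpd policy allows reading it
        chcon -R -t httpd_user_content_t /home/huckphin/public_html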

    Read the article

  • Apache Caching and Expires configuration

    - by mcondiff
    I'm looking for the best possible caching/expires configuration for my specific situation. I realize that some sites have advocated turning ETags off:

        Header unset ETag
        FileETag None

    I know that I should use either Expires or Cache-Control. In addition, I know that I should use either Last-Modified or ETags (per the YSlow docs). I inherited a client's server that uses the following in .htaccess:

        <FilesMatch "\.(ico|pdf|flv|jpg|jpeg|png|gif|js|css|swf|xml|txt|html|htm)$">
            Header set Cache-Control "max-age=172800, public, must-revalidate"
        </FilesMatch>

    With this server I am not going to be able to rely on staff to rename images, CSS and JS in web applications, so I do not want to set the expiry far in the future without knowing (with good certainty) that most or all browsers will check to see if content has changed. What I do not want to happen is someone calling to say the website is broken because they replaced an image and it's not showing up. But I do want to take as much advantage of caching and expires as I can, while still ensuring that mostly all browsers will check with the server to see whether components have changed. I have access to both the .htaccess and the Apache .conf file, and it is a single server; the content is not deployed on multiple servers. What would be the best .htaccess or .conf configuration for me to achieve my goals on this client's server? Thanks for your help
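
    A sketch of one way to express that trade-off (assumes mod_expires and mod_headers are enabled; the two-day lifetime just mirrors the inherited rule). mod_expires emits both an Expires header and a matching Cache-Control max-age, and leaving Last-Modified in place while adding must-revalidate keeps cheap conditional requests working even with ETags removed:

        ExpiresActive On
        <FilesMatch "\.(ico|pdf|flv|jpe?g|png|gif|js|css|swf)$">
            ExpiresDefault "access plus 2 days"
            Header append Cache-Control "public, must-revalidate"
        </FilesMatch>
        # static files still get Last-Modified by default, so browsers can send
        # If-Modified-Since requests and receive 304s once the max-age has expired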

    Read the article
