Search Results

Search found 9816 results on 393 pages for 'named conf'.


  • Does Ivy's url resolver support transitive retrieval?

    - by Sean
    For some reason I can't seem to resolve the dependencies of my dependencies when using a url resolver to specify a repository's location. However, when using the ibiblio resolver, I am able to retrieve them. For example:

        <!-- Ivy file -->
        <ivy-module version="1.0">
            <info organisation="org.apache" module="chained-resolvers"/>
            <dependencies>
                <dependency org="commons-lang" name="commons-lang" rev="2.0" conf="default"/>
                <dependency org="checkstyle" name="checkstyle" rev="5.0"/>
            </dependencies>
        </ivy-module>

        <!-- ivysettings file -->
        <ivysettings>
            <settings defaultResolver="chained"/>
            <resolvers>
                <chain name="chained">
                    <url name="custom-repo">
                        <ivy pattern="http://my.internal.domain.name/ivy/[organisation]/[module]/[revision]/ivy-[revision].xml"/>
                        <artifact pattern="http://my.internal.domain.name/ivy/[organisation]/[module]/[revision]/[artifact]-[revision].[ext]"/>
                    </url>
                    <url name="ibiblio-mirror" m2compatible="true">
                        <artifact pattern="http://mirrors.ibiblio.org/pub/mirrors/maven2/[organisation]/[module]/[revision]/[artifact]-[revision].[ext]"/>
                    </url>
                    <ibiblio name="ibiblio" m2compatible="true"/>
                </chain>
            </resolvers>
        </ivysettings>

        <!-- checkstyle ivy.xml file generated from pom via ivy:install task -->
        <?xml version="1.0" encoding="UTF-8"?>
        <ivy-module version="1.0" xmlns:m="http://ant.apache.org/ivy/maven">
            <info organisation="checkstyle" module="checkstyle" revision="5.0" status="release" publication="20090509202448" namespace="maven2">
                <license name="GNU Lesser General Public License" url="http://www.gnu.org/licenses/lgpl.txt"/>
                <description homepage="http://checkstyle.sourceforge.net/">
                    Checkstyle is a development tool to help programmers write Java code that adheres to a coding standard
                </description>
            </info>
            <configurations>
                <conf name="default" visibility="public" description="runtime dependencies and master artifact can be used with this conf" extends="runtime,master"/>
                <conf name="master" visibility="public" description="contains only the artifact published by this module itself, with no transitive dependencies"/>
                <conf name="compile" visibility="public" description="this is the default scope, used if none is specified. Compile dependencies are available in all classpaths."/>
                <conf name="provided" visibility="public" description="this is much like compile, but indicates you expect the JDK or a container to provide it. It is only available on the compilation classpath, and is not transitive."/>
                <conf name="runtime" visibility="public" description="this scope indicates that the dependency is not required for compilation, but is for execution. It is in the runtime and test classpaths, but not the compile classpath." extends="compile"/>
                <conf name="test" visibility="private" description="this scope indicates that the dependency is not required for normal use of the application, and is only available for the test compilation and execution phases." extends="runtime"/>
                <conf name="system" visibility="public" description="this scope is similar to provided except that you have to provide the JAR which contains it explicitly. The artifact is always available and is not looked up in a repository."/>
                <conf name="sources" visibility="public" description="this configuration contains the source artifact of this module, if any."/>
                <conf name="javadoc" visibility="public" description="this configuration contains the javadoc artifact of this module, if any."/>
                <conf name="optional" visibility="public" description="contains all optional dependencies"/>
            </configurations>
            <publications>
                <artifact name="checkstyle" type="jar" ext="jar" conf="master"/>
            </publications>
            <dependencies>
                <dependency org="antlr" name="antlr" rev="2.7.6" force="true" conf="compile->compile(*),master(*);runtime->runtime(*)"/>
                <dependency org="apache" name="commons-beanutils-core" rev="1.7.0" force="true" conf="compile->compile(*),master(*);runtime->runtime(*)"/>
                <dependency org="apache" name="commons-cli" rev="1.0" force="true" conf="compile->compile(*),master(*);runtime->runtime(*)"/>
                <dependency org="apache" name="commons-logging" rev="1.0.3" force="true" conf="compile->compile(*),master(*);runtime->runtime(*)"/>
                <dependency org="com.google.collections" name="google-collections" rev="0.9" force="true" conf="compile->compile(*),master(*);runtime->runtime(*)"/>
            </dependencies>
        </ivy-module>

    Using the "ibiblio" resolver I have no problem resolving my project's two dependencies (commons-lang 2.0 and checkstyle 5.0) and checkstyle's dependencies. However, when attempting to exclusively use the "custom-repo" or "ibiblio-mirror" resolvers, I am able to resolve my project's two explicitly defined dependencies, but not checkstyle's dependencies. Is this possible? Any help would be greatly appreciated.
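    A hedged guess at the cause, at least for the ibiblio-mirror resolver above: Ivy can only walk a module's transitive dependencies if the resolver can fetch the module descriptor (the POM, in an m2compatible repository), and ibiblio-mirror declares only an artifact pattern. The built-in ibiblio resolver locates POMs on its own, which would explain why it works. A sketch of the fix, assuming the mirror follows the standard Maven 2 layout:

        <url name="ibiblio-mirror" m2compatible="true">
            <!-- The ivy pattern tells the resolver where module descriptors live;
                 without it only the jar is fetched, and transitive dependencies
                 are never discovered. -->
            <ivy pattern="http://mirrors.ibiblio.org/pub/mirrors/maven2/[organisation]/[module]/[revision]/[module]-[revision].pom"/>
            <artifact pattern="http://mirrors.ibiblio.org/pub/mirrors/maven2/[organisation]/[module]/[revision]/[artifact]-[revision].[ext]"/>
        </url>

    The same reasoning would apply to custom-repo if the ivy-[revision].xml files for the transitive modules are missing from the internal repository.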

    Read the article

  • Problems with type and ext attribute of artifact

    - by user315228
    Hi, I have the following definition in ivy.xml:

        <dependency org="southbeach" name="ego" rev="4.3.1" conf="properties->asterik">
            <artifact name="ego" type="conf" ext="conf" conf="properties->asterik"/>
        </dependency>

    I have files with either extension conf or properties which I need at runtime. In ivysettings.xml, I have the following:

        <filesystem name="privateFSa">
            <artifact pattern="${localRepositoryLocation}/[artifact].[ext]" />
        </filesystem>

    It always tries to look for ego.jar instead of ego.conf. Can somebody please shed light on this? Am I doing something wrong, or does Ivy just support tar, zip, gz and jar, and not properties or conf files? For now I have worked around it in ivysettings.xml:

        <filesystem name="privateFSa">
            <artifact pattern="${localRepositoryLocation}/[artifact].conf" />
        </filesystem>

    but hardcoding conf there doesn't look good. Thanks, Almas
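    One thing worth checking, offered as a guess rather than a confirmed fix: when the conf mapping on a nested artifact element does not match the configuration actually being resolved, Ivy falls back to the module's default artifact, which is a jar. Dropping the conf attribute from the artifact element makes the declaration apply to every configuration:

        <dependency org="southbeach" name="ego" rev="4.3.1" conf="properties->asterik">
            <!-- No conf attribute here: the conf/conf artifact now applies to all
                 configurations, so Ivy should look for ego.conf, not ego.jar. -->
            <artifact name="ego" type="conf" ext="conf"/>
        </dependency>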

    Read the article

  • Session memory – who’s this guy named Max and what’s he doing with my memory?

    - by extended_events
    SQL Server MVP Jonathan Kehayias (blog) emailed me a question last week when he noticed that the total memory used by the buffers for an event session was larger than the value he specified for the MAX_MEMORY option in the CREATE EVENT SESSION DDL. The answer here seems like an excellent subject for me to kick off my new “401 – Internals” tag that identifies posts where I pull back the curtains a bit and let you peek into what’s going on inside the extended events engine.

    In a previous post (Option Trading: Getting the most out of the event session options) I explained that we use a set of buffers to store the event data before we write the event data to asynchronous targets. The MAX_MEMORY along with the MEMORY_PARTITION_MODE defines how big each buffer will be. Theoretically, that means that I can predict the size of each buffer using the following formula:

        max memory / # of buffers = buffer size

    If it were that simple I wouldn’t be writing this post.

    I’ll take “boundary” for 64K, Alex

    For a number of reasons that are beyond the scope of this blog, we create event buffers in 64K chunks. The result of this is that the buffer size indicated by the formula above is rounded up to the next 64K boundary, and that is the size used to create the buffers. If you think visually, this means that the graph of your max_memory option compared to the actual buffer size that results will look like a set of stairs rather than a smooth line. You can see this behavior by looking at the output of dm_xe_sessions, specifically the fields related to the buffer sizes, over a range of different memory inputs:

    Note: This test was run on a 2 core machine using per_cpu partitioning, which results in 5 buffers. (See my previous post referenced above for the math behind buffer count.)

        input_memory_kb  total_regular_buffers  regular_buffer_size  total_buffer_size
        637              5                      130867               654335
        638              5                      130867               654335
        639              5                      130867               654335
        640              5                      196403               982015
        641              5                      196403               982015
        642              5                      196403               982015

    This is just a segment of the results that shows one of the “jumps” between the buffer boundary at 639 KB and 640 KB. You can verify the size boundary by doing the math on the regular_buffer_size field, which is returned in bytes:

        196403 – 130867 = 65536 bytes
        65536 / 1024 = 64 KB

    The relationship between the input for max_memory and when the regular_buffer_size is going to jump from one 64K boundary to the next is going to change based on the number of buffers being created. The number of buffers is dependent on the partition mode you choose. If you choose any partition mode other than NONE, the number of buffers will depend on your hardware configuration. (Again, see the earlier post referenced above.) With the default partition mode of NONE, you always get three buffers, regardless of machine configuration, so I generated a “range table” for max_memory settings between 1 KB and 4096 KB as an example.
        start_memory_range_kb  end_memory_range_kb  total_regular_buffers  regular_buffer_size  total_buffer_size
        1                      191                  NULL                   NULL                 NULL
        192                    383                  3                      130867               392601
        384                    575                  3                      196403               589209
        576                    767                  3                      261939               785817
        768                    959                  3                      327475               982425
        960                    1151                 3                      393011               1179033
        1152                   1343                 3                      458547               1375641
        1344                   1535                 3                      524083               1572249
        1536                   1727                 3                      589619               1768857
        1728                   1919                 3                      655155               1965465
        1920                   2111                 3                      720691               2162073
        2112                   2303                 3                      786227               2358681
        2304                   2495                 3                      851763               2555289
        2496                   2687                 3                      917299               2751897
        2688                   2879                 3                      982835               2948505
        2880                   3071                 3                      1048371              3145113
        3072                   3263                 3                      1113907              3341721
        3264                   3455                 3                      1179443              3538329
        3456                   3647                 3                      1244979              3734937
        3648                   3839                 3                      1310515              3931545
        3840                   4031                 3                      1376051              4128153
        4032                   4096                 3                      1441587              4324761

    As you can see, there are 21 “steps” within this range, and max_memory values below 192 KB fall below the 64K-per-buffer limit, so they generate an error when you attempt to specify them.

    Max approximates True as memory approaches 64K

    The upshot of this is that the max_memory option does not imply a contract for the maximum memory that will be used for the session buffers (those of you who read Take it to the Max (and beyond) know that max_memory really refers only to the event session buffer memory), but is more of an estimate of total buffer size to the nearest higher multiple of 64K times the number of buffers you have. The maximum delta between your initial max_memory setting and the true total buffer size occurs right after you break through a 64K boundary; for example, if you set max_memory = 576 KB (see the green line in the table), your actual buffer size will be closer to 767 KB in a non-partitioned event session. You get “stepped up” for every 191 KB block of initial max_memory, which isn’t likely to cause a problem for most machines.

    Things get more interesting when you consider a partitioned event session on a computer that has a large number of logical CPUs or NUMA nodes. Since each buffer gets “stepped up” when you break a boundary, the delta can get much larger because it’s multiplied by the number of buffers. For example, a machine with 64 logical CPUs will have 160 buffers using per_cpu partitioning, or if you have 8 NUMA nodes configured on that machine you would have 24 buffers when using per_node. If you’ve just broken through a 64K boundary and get “stepped up” to the next buffer size, you’ll end up with a total buffer size approximately 10240 KB and 1536 KB, respectively (64K * # of buffers), larger than the max_memory value you might think you’re getting. Using per_cpu partitioning on a large machine has the most impact because of the large number of buffers created. If the amount of memory being used by your system within these ranges is important to you, then this is something worth paying attention to and considering when you configure your event sessions.

    The DMV dm_xe_sessions is the tool to use to identify the exact buffer size for your sessions. In addition to the regular buffers (read: event session buffers) you’ll also see the details for large buffers if you have configured MAX_EVENT_SIZE. The “buffer steps” for any given hardware configuration should be static within each partition mode, so if you want to have a handy reference available when you configure your event sessions, you can use the following code to generate a range table similar to the one above that is applicable to your specific machine and chosen partition mode.
        DECLARE @buf_size_output table (input_memory_kb bigint, total_regular_buffers bigint, regular_buffer_size bigint, total_buffer_size bigint)
        DECLARE @buf_size int, @part_mode varchar(8)
        SET @buf_size = 1          -- Set to the beginning of your max_memory range (KB)
        SET @part_mode = 'per_cpu' -- Set to the partition mode for the table you want to generate
        WHILE @buf_size <= 4096    -- Set to the end of your max_memory range (KB)
        BEGIN
            BEGIN TRY
                IF EXISTS (SELECT * FROM sys.server_event_sessions WHERE name = 'buffer_size_test')
                    DROP EVENT SESSION buffer_size_test ON SERVER
                DECLARE @session nvarchar(max)
                SET @session = 'create event session buffer_size_test on server
                                add event sql_statement_completed
                                add target ring_buffer
                                with (max_memory = ' + CAST(@buf_size as nvarchar(4)) + ' KB, memory_partition_mode = ' + @part_mode + ')'
                EXEC sp_executesql @session
                SET @session = 'alter event session buffer_size_test on server
                                state = start'
                EXEC sp_executesql @session
                INSERT @buf_size_output (input_memory_kb, total_regular_buffers, regular_buffer_size, total_buffer_size)
                    SELECT @buf_size, total_regular_buffers, regular_buffer_size, total_buffer_size
                    FROM sys.dm_xe_sessions WHERE name = 'buffer_size_test'
            END TRY
            BEGIN CATCH
                INSERT @buf_size_output (input_memory_kb)
                    SELECT @buf_size
            END CATCH
            SET @buf_size = @buf_size + 1
        END
        DROP EVENT SESSION buffer_size_test ON SERVER
        SELECT MIN(input_memory_kb) start_memory_range_kb, MAX(input_memory_kb) end_memory_range_kb,
               total_regular_buffers, regular_buffer_size, total_buffer_size
        FROM @buf_size_output
        GROUP BY total_regular_buffers, regular_buffer_size, total_buffer_size

    Thanks to Jonathan for an interesting question and a chance to explore some of the details of Extended Event internals. - Mike

    Read the article

  • B2B and B2C alike… but a little different – Oracle Commerce named Leader in Forrester B2B Commerce Wave

    - by Katrina Gosek
    We weren’t surprised to see Oracle Commerce positioned as a Leader in Forrester Research, Inc.’s first Commerce Wave focused on B2B, “The Forrester Wave™: B2B Commerce Suites, Q4 2013,” released earlier this month. We believe the report validates much of what we’ve heard from our largest customers – the world’s largest distribution, manufacturing and high-tech companies, who sell billions of dollars of goods and services to other businesses through their Web channels. More importantly, we feel the report confirms something very important: B2B and B2C Commerce are alike… but a little different.

    B2B and B2C Commerce are alike…

    Clearly, B2C experiences have set expectations for B2B. Every B2B buyer is a consumer at home and brings the same expectations to a website selling electronic components, aftermarket parts, or MRO products. Forrester calls these rich consumer-based capabilities that help B2B customers do their jobs “table stakes”: front-office content, community, and commerce features that meet customer expectations for 24x7x365 ordering, real-time customer service, and expedited shipping — both online and on mobile devices: “Whether they are just beginning to sell online or are in the late stages of launching a next-generation site, B2B eCommerce operations today must: offer a customer experience standard comparable to what leading b2c sites now offer; address the growing influence that mobile devices are having in the workplace; make a qualitative and quantitative business case that drives sustained investment.”

    Just five years ago, many of our B2B customers’ online business comprised only 5-10% of their total revenue. Today, when we speak to those same brands, we hear about double- and triple-digit growth in their online channels. Many have seen the percentage of business they perform in their web channels cross the 30-50% threshold. You can hear first-hand from several Oracle Commerce B2B customers about the success they are seeing and what they’re trying to accomplish (Carolina Biological, Premier Farnell, DeliXL, Elsevier). This market momentum is likely why Forrester broke the B2B Commerce Wave out from the B2C Wave. In fact, B2B is becoming the larger force in commerce, expected to collect twice the online dollars of B2C this year ($559 billion).

    But a little different…

    Despite the similarities, there is a key and very important difference between B2C and B2B. Unlike a consumer shopping for shoes, a business shopper buying from a distributor or manufacturer comes to the Web channel as part of their job. So in addition to the rich, consumer-like experience this shopper expects, B2B buyers need quoting tools and complex pricing capabilities, like eProcurement, bulk order entry, and other self-service tools such as account, contract and organization management. Forrester also emphasizes three additional “back-end” tools and capabilities their clients say they need to drive growth in their B2B online channels: i) product information management (PIM), which provides a single system of record for large part lists and product catalogs; ii) web content management (WCM), needed to manage large volumes of unstructured marketing information; and iii) order management systems (OMS), which manage and orchestrate the complex B2B order life cycle from quote through approval, submission to manufacturing, distribution and delivery.
    We would like to expand on each of these three areas. As Forrester suggests, back-end PIM is definitely needed by B2B Commerce providers. Most B2B companies have made significant investments in enterprise-grade PIMs, given the importance of product data management for aggregation and syndication of content, product attribution, analytics, and handling of complex workflows. While in principle it may sound appealing to have a PIM as part of a commerce offering (especially for SMBs who have to do more with less), our customers have typically found that a PIM in a commerce platform is largely redundant with what they already have in place, and is not fully featured or robust enough to handle the complexity of the product data sets that B2B distributors and manufacturers usually manage. To meet the PIM needs for commerce, Oracle offers an enterprise PIM (Product Hub/Fusion PIM) and a robust enterprise data quality product (EDQP) integrated with the Oracle Commerce solution. These are key differentiators of our offering, and these capabilities are becoming even more tightly integrated with Oracle Commerce over time. For Commerce, what customers really need is a robust product catalog and content management system that enables business users to further enrich and ready catalog and content data to be presented and sold online. This has been a significant area of investment in the Oracle Commerce platform, and it continues to get stronger. We see this combination of capabilities as best meeting the needs of our customers for a commerce platform, without adding a largely redundant, less functional PIM to the commerce front-end.

    On the topic of web content management, we were pleased to see Forrester cite Oracle’s differentiated digital experience capability in this area and the “unique opportunity in the market to lead the convergence of commerce and content management with the amalgamation of Oracle Commerce with WebCenter Sites (formerly FatWire).” Strong content management capabilities are critical for distributors and manufacturers who frequently serve an engineering audience that comes to their websites to conduct product research in search of technical data sheets, drawings, videos and more. The convergence of content, commerce, and experience is critical for B2B brands selling online.

    Regarding order management, Forrester notes that many businesses use their existing back-end enterprise resource planning (ERP) systems to manage order life cycles. We hear the same from most of our B2B customers: they already have an ERP system – if not several of them – and are not interested in yet another one.

    So what do we take away from the Wave results? Forrester notes that the Oracle Commerce Platform “has always had strong B2B commerce capabilities and Oracle certainly has an exhaustive list of B2B customers using the solution.” What makes us excited about developing leading B2B solutions are the close relationships with our customers and the clear opportunity in the market – which we'll address in an exciting new release planned for the next 12 months. Oracle has one of the world’s largest B2B customer bases, providing leading solutions across key business-to-business functions – from marketing, sales automation, and service to master data management, and ERP.
    To learn more about Oracle’s Commerce product vision and strategy, visit our website and check out these other B2B Commerce Resources:
    - 2013 B2B Commerce Trends Report
    - B2B Commerce Whitepaper: Consumerization, Complexity, Change
    - B2B Commerce Webcast: What Industry Trend Setters Do Right
    - Internet Retailer, Web Drives Sales for B2B Companies
    - Internet Retailer, The Web Means Business: B2B Companies Beef Up Their Websites, borrowing from b2c retailers and breaking new ground
    - Internet Retailer, B2B e-Commerce is poised for growth

    Read the article

  • ImportError: No module named _socket? WSGI Deployment into Apache

    - by Sxkaur
    I am using WSGI 3.3 for Python 2.7.3 (32-bit) on Apache 2.2. I got the binary WSGI from http://code.google.com/p/modwsgi/downloads/detail?name=mod_wsgi-win32-ap22py27-3.3.so. I have been trying to deploy an application but keep receiving "ImportError: No module named _socket". I have included my WSGI file and error logs.

    Apache config:

        #LoadModule vhost_alias_module modules/mod_vhost_alias.so
        LoadModule wsgi_module modules/mod_wsgi.so
        <Directory C:/Users/xxxxd/Documents/cahd>
            AllowOverride None
            Options None
            Order deny,allow
            Allow from all
        </Directory>
        WSGIScriptAlias / C:/Users/xxxxd/Documents/cahd/cahd/django.wsgi

    django.wsgi:

        import os, sys
        sys.path.append('C:/Users/xxxxd/Documents')
        sys.path.append('C:/Users/xxxxd/Documents/cahd/')
        os.environ['DJANGO_SETTINGS_MODULE'] = 'cahd.settings'
        import django.core.handlers.wsgi
        application = django.core.handlers.wsgi.WSGIHandler()

    The error was:

        [Mon Nov 19 09:44:17 2012] [error] [client 127.0.0.1] Traceback (most recent call last):
        [Mon Nov 19 09:44:17 2012] [error] [client 127.0.0.1] File "C:/Users/xxxxd/Documents/cahd/django.wsgi", line 10, in <module>
        [Mon Nov 19 09:44:17 2012] [error] [client 127.0.0.1] import django.core.handlers.wsgi
        [Mon Nov 19 09:44:17 2012] [error] [client 127.0.0.1] File "C:\\django\\Django-1.4.1\\django\\core\\handlers\\wsgi.py", line 8, in <module>
        [Mon Nov 19 09:44:17 2012] [error] [client 127.0.0.1] from django import http
        [Mon Nov 19 09:44:17 2012] [error] [client 127.0.0.1] File "C:\\django\\Django-1.4.1\\django\\http\\__init__.py", line 11, in <module>
        [Mon Nov 19 09:44:17 2012] [error] [client 127.0.0.1] from urllib import urlencode, quote
        [Mon Nov 19 09:44:17 2012] [error] [client 127.0.0.1] File "C:\\Python27\\Lib\\urllib.py", line 26, in <module>
        [Mon Nov 19 09:44:17 2012] [error] [client 127.0.0.1] import socket
        [Mon Nov 19 09:44:17 2012] [error] [client 127.0.0.1] File "C:\\Python27\\Lib\\socket.py", line 47, in <module>
        [Mon Nov 19 09:44:17 2012] [error] [client 127.0.0.1] import _socket
        [Mon Nov 19 09:44:17 2012] [error] [client 127.0.0.1] ImportError: No module named _socket
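    A hedged note on the usual cause: _socket is a C extension that lives under C:\Python27\DLLs, and this traceback pattern typically means the interpreter embedded in Apache cannot see that directory (common when Apache runs as a service account and Python was installed "just for me", or when mod_wsgi picks up a different Python than expected). A sketch of directives worth trying in httpd.conf; the paths are assumptions based on a default C:\Python27 install:

        WSGIPythonHome "C:/Python27"
        WSGIPythonPath "C:/Python27/Lib;C:/Python27/DLLs;C:/Python27/Lib/site-packages"

    Reinstalling Python 2.7 (32-bit, matching the mod_wsgi build) "for all users" is also worth trying, so the service account can read the registry keys and DLLs.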

    Read the article

  • PostgreSQL pg_hba.conf with "password" auth wouldn't work with PHP pg_connect?

    - by tftd
    I've recently experimented with the settings in pg_hba.conf. I read the PostgreSQL documentation and I thought that the "password" auth method is what I want. There are many people with access to the server PostgreSQL runs on, so I don't want the "trust" method. So I changed it. But then PHP stopped working with the database. The message I get is:

        Warning: pg_connect(): Unable to connect to PostgreSQL server: FATAL: password authentication failed for user "myuser" in /my/path/to/connection/class.php on line 35

    It is kind of strange, because I can connect via phppgadmin without any problems, and I can also connect from my home computer with psql – again without any problems. This is my pg_hba.conf:

        # TYPE  DATABASE  USER  CIDR-ADDRESS  METHOD
        # "local" is for Unix domain socket connections only
        local   all       all                 password
        # IPv4 local connections:
        host    all       all   127.0.0.1/32  password
        # IPv6 local connections:
        host    all       all   ::1/128       password

    The connection string I'm using with pg_connect is:

        $connect_string = "host=localhost port=5432 dbname=mydbname user=auser password=apassword";
        $dbConnection = pg_connect($connect_string);

    Does anybody know why this is happening? Did I misconfigure something?
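    A hedged suggestion rather than a confirmed diagnosis: the md5 method is the more common choice for this kind of setup, since it works the same for local and TCP connections and avoids sending the password in the clear. Also easy to miss: after editing pg_hba.conf, the server must re-read it (pg_ctl reload, or SELECT pg_reload_conf(); as a superuser), otherwise the old rules stay in effect and failures look mysterious. A sketch of the md5 variant:

        # TYPE  DATABASE  USER  CIDR-ADDRESS  METHOD
        local   all       all                 md5
        host    all       all   127.0.0.1/32  md5
        host    all       all   ::1/128       md5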

    Read the article

  • Anonymous function vs. separate named function for initialization in jquery

    - by Martin N.
    We just had a somewhat controversial discussion and I would like to hear your opinions on the issue. Let's say we have some code that is used to initialize things when a page is loaded, and it looks like this:

        function initStuff() { ... }
        ...
        $(document).ready(initStuff);

    The initStuff function is only called from the third line of the snippet, never again. Now I would say that usually people put this into an anonymous callback like this:

        $(document).ready(function() {
            // Body of initStuff
        });

    because having the function in a dedicated location in the code does not really help readability: the call to ready() already makes it obvious that this code is initialization stuff. Would you agree or disagree with that decision? And why? Thank you very much for your opinion!
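    For comparison, a small sketch of the two patterns side by side (the selector and the init work are illustrative only):

        // Anonymous callback: the init code reads top-down at the point of registration.
        $(document).ready(function () {
            $('#menu').hide();
        });

        // Named function: reusable, and the name shows up in stack traces and profilers.
        function initStuff() {
            $('#menu').hide();
        }
        $(document).ready(initStuff);

    The usual trade-off: the anonymous form keeps one-shot wiring local, while the named form pays off as soon as the function is reused, tested, or debugged.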

    Read the article

  • Apache httpd: Send error logs to syslog and local disk? Without touching /etc/syslog.conf?

    - by Stefan Lasiewski
    I have an Apache httpd 2.2 server. I want to log all messages using syslog, so that the requests are sent to our central syslog server. I also want to ensure that all log messages are sent to local disk, so that a sysadmin can have easy access to the log files on the local system. It is easy to send HTTP access logs to both the local disk and to syslog. One common method is:

        LogFormat "%V %h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
        CustomLog logs/access_log combined
        CustomLog "|/usr/bin/logger -t httpd -i -p local4.info" combined

    But it is not easy to do this for error logs. The following configuration doesn't work, because the error logs only use the last ErrorLog stanza; the first ErrorLog stanza is ignored:

        ErrorLog logs/error_log
        ErrorLog syslog:local4.error

    How can I ensure that Apache error logs are written to the local disk and are sent to syslog? Is it possible to do this without touching /etc/syslog.conf? I am fine if my users want to manage their own Apache configuration files, but I do not want them touching system files such as /etc/syslog.conf.
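    One approach that fits the constraint, sketched on the assumption that tee and logger are in the usual locations: pipe the error log to a shell pipeline that appends to disk and forwards to syslog, so /etc/syslog.conf never changes:

        ErrorLog "|/bin/sh -c '/usr/bin/tee -a /var/log/httpd/error_log | /usr/bin/logger -t httpd -p local4.err'"

    Apache treats a program prefixed with | as a piped logger, so this should work on 2.2; the target log file's directory must exist and be writable by the piped process.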

    Read the article

  • Nautilus-Action conf. tool - crafting a "set as background" action

    - by EgyptBeast
    I wanted to create an option in the context menu to set the clicked picture as the current desktop background (just like in Windows). I read the Nautilus-Actions help but I couldn't figure it out. This is the closest command I have been able to craft:

        gsettings set org.gnome.desktop.background picture-uri file://$PWD/

    What I need:
    - A command that correctly sets the clicked image as the desktop background
    - The command should only appear for the proper files (picture extensions like .jpg)
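    A sketch of one way to wire this up in the Nautilus-Actions dialog. The %u placeholder is assumed to expand to the URI of the selected file; worth double-checking against the placeholder legend shown in the Command tab of your version:

        Command path:  gsettings
        Parameters:    set org.gnome.desktop.background picture-uri %u

    To make the entry appear only for pictures, set the mimetype condition on the Conditions tab to image/* rather than matching on extensions like .jpg.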

    Read the article

  • Is it possible to mix a named pipe with select in perl?

    - by Haiyuan Zhang
    I need to write a daemon that is supposed to have one TCP socket and one named pipe. Usually, if I need to implement a multi-I/O server with "pure" sockets, a select-based multi-I/O model is what I choose. Has any of you ever used a named pipe in select, or can you tell me it is impossible? Thanks in advance.
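    It is possible: on Unix a named pipe is just another file descriptor, and select() (here via the IO::Select wrapper) will happily watch it alongside sockets. A minimal sketch, with the port and FIFO path as assumptions:

        use Fcntl;                 # O_RDONLY, O_NONBLOCK
        use IO::Select;
        use IO::Socket::INET;

        # Hypothetical listening socket and FIFO path.
        my $server = IO::Socket::INET->new(LocalPort => 9000, Listen => 5)
            or die "listen: $!";
        # O_NONBLOCK so open() itself does not block waiting for a writer.
        sysopen(my $fifo, '/tmp/mydaemon.fifo', O_RDONLY | O_NONBLOCK)
            or die "open fifo: $!";

        my $sel = IO::Select->new($server, $fifo);
        while (my @ready = $sel->can_read) {
            for my $fh (@ready) {
                if ($fh == $server) {
                    $sel->add($server->accept);    # new TCP client
                } else {
                    my $n = sysread($fh, my $buf, 4096);
                    # $n == 0 on the FIFO means the writer closed; reopen or drop it.
                    # ... handle $buf from the FIFO or a TCP client ...
                }
            }
        }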

    Read the article

  • How to create a named temporary file in memory?

    - by conradlee
    I would like to use Python's tempfile module to create a temporary file that I will use for communication between processes (use of pipes is awkward). The documentation I've linked to above shows two functions that almost do what I want:

        tempfile.NamedTemporaryFile    # For creating named tempfiles
        tempfile.SpooledTemporaryFile  # For creating tempfiles in memory

    but actually I want a tempfile that is both named AND in memory. Any ideas?
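    One approach, assuming a Linux host: /dev/shm is a tmpfs (RAM-backed) mount, so a named temporary file created there has a real path that other processes can open, while its contents stay in memory:

        import tempfile

        # dir='/dev/shm' places the file on tmpfs (RAM-backed) on most Linux systems.
        f = tempfile.NamedTemporaryFile(dir='/dev/shm')
        print(f.name)   # a real path, e.g. /dev/shm/tmpa1b2c3, openable by other processes
        f.write(b'hello')
        f.flush()       # make the bytes visible to readers before handing the name over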

    Read the article

  • dhclient.conf: Send 2x host-names to the DHCP server?

    - by RobM
    Already working:
    - Debian box does DHCP with send host-name me.company.com in dhclient.conf
    - DNS updates automatically with an entry for me.company.com

    What I want to add:
    - Send a second host-name, so both are automatically registered with DNS

    In other words: I want a DHCP client to register with DNS twice using different names, preferably without having to maintain DNS records manually. Is this even possible with DHCP?
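    A hedged answer: the DHCP host-name option carries a single value per lease, so a stock dhclient cannot send two names on one interface. The closest dhclient.conf construct is per-interface send statements, which only helps if the box holds two leases (interface names assumed):

        interface "eth0" {
            send host-name "me.company.com";
        }
        interface "eth1" {
            send host-name "me2.company.com";
        }

    Registering a second name for a single lease usually ends up being the DNS server's job (e.g. a CNAME added alongside the dynamic record) rather than the DHCP client's.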

    Read the article

  • What prevents an attack on Postfix through its named pipes?

    - by MetaEd
    What prevents an attack on Postfix through its named pipes by writing bogus data to them? I see on my system that they permit write access to other. I wonder if that opens Postfix to DoS or some other form of attack. prw--w--w- 1 postfix postdrop 0 Nov 28 21:13 /var/spool/postfix/public/pickup prw--w--w- 1 postfix postdrop 0 Nov 28 21:13 /var/spool/postfix/public/qmgr I reviewed the pickup(8) man page, and searched here and elsewhere, but failed to turn up any answers.

    Read the article

  • How do I recover from pushing a gitosis.conf file with parsing errors due to line breaks?

    - by Kasia
    I have successfully set up gitosis for an Android mirror (containing multiple git repositories). While adding a new .git path following writable= in gitosis.conf, I managed to insert a few line breaks. I saved, committed and pushed to the server, at which point I received the following parsing error:

        Traceback (most recent call last):
          File "/usr/bin/gitosis-run-hook", line 8, in <module>
            load_entry_point('gitosis==0.2', 'console_scripts', 'gitosis-run-hook')()
          File "/usr/lib/python2.5/site-packages/gitosis-0.2-py2.5.egg/gitosis/app.py", line 24, in run
            return app.main()
          File "/usr/lib/python2.5/site-packages/gitosis-0.2-py2.5.egg/gitosis/app.py", line 38, in main
            self.handle_args(parser, cfg, options, args)
          File "/usr/lib/python2.5/site-packages/gitosis-0.2-py2.5.egg/gitosis/run_hook.py", line 75, in handle_args
            post_update(cfg, git_dir)
          File "/usr/lib/python2.5/site-packages/gitosis-0.2-py2.5.egg/gitosis/run_hook.py", line 33, in post_update
            cfg.read(os.path.join(export, '..', 'gitosis.conf'))
          File "/usr/lib/python2.5/ConfigParser.py", line 267, in read
            self._read(fp, filename)
          File "/usr/lib/python2.5/ConfigParser.py", line 490, in _read
            raise e
        ConfigParser.ParsingError: File contains parsing errors: ./gitosis-export/../gitosis.conf
        (...)

    I have removed the line break and amended the commit:

        git commit -m "fix linebreak" --amend

    However, git push still yields the exact same error. It leads me to believe gitosis is preventing me from doing any further pushes. How do I recover from this?
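    A hedged observation: since --amend rewrites the already-pushed commit, a plain git push is rejected as a non-fast-forward (or resends nothing), so the server-side hook keeps parsing the old, broken gitosis.conf. Forcing the push re-sends the fixed file and re-runs the hook (branch name assumed):

        git push -f origin master

    If the push itself is refused, the fallback is to correct gitosis.conf directly on the server and let the next successful push of the gitosis-admin repository resynchronize things.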

    Read the article

  • solved: puppet master REST API returns 403 when running under Passenger, works when master runs from command line

    - by Anadi Misra
    I am using the standard auth.conf provided in the puppet install for the puppet master, which is running through Passenger under Nginx. However, for most of the catalog, file and certificate requests I get a 403 response.

        ### Authenticated paths - these apply only when the client
        ### has a valid certificate and is thus authenticated

        # allow nodes to retrieve their own catalog
        path ~ ^/catalog/([^/]+)$
        method find
        allow $1

        # allow nodes to retrieve their own node definition
        path ~ ^/node/([^/]+)$
        method find
        allow $1

        # allow all nodes to access the certificates services
        path ~ ^/certificate_revocation_list/ca
        method find
        allow *

        # allow all nodes to store their reports
        path /report
        method save
        allow *

        # unconditionally allow access to all file services
        # which means in practice that fileserver.conf will
        # still be used
        path /file
        allow *

        ### Unauthenticated ACL, for clients for which the current master doesn't
        ### have a valid certificate; we allow authenticated users, too, because
        ### there isn't a great harm in letting that request through.

        # allow access to the master CA
        path /certificate/ca
        auth any
        method find
        allow *

        path /certificate/
        auth any
        method find
        allow *

        path /certificate_request
        auth any
        method find, save
        allow *

        path /facts
        auth any
        method find, search
        allow *

        # this one is not strictly necessary, but it has the merit
        # of showing the default policy, which is deny everything else
        path /
        auth any

    The puppet master, however, does not seem to be following this, as I get this error on the client:

        [amisr1@blramisr195602 ~]$ sudo puppet agent --no-daemonize --verbose --server bangvmpllda02.XXXXX.com
        [sudo] password for amisr1:
        Starting Puppet client version 3.0.1
        Warning: Unable to fetch my node definition, but the agent run will continue:
        Warning: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /certificate_revocation_list/ca [find] at :110
        Info: Retrieving plugin
        Error: /File[/var/lib/puppet/lib]: Failed to generate additional resources using 'eval_generate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [search] at :110
        Error: /File[/var/lib/puppet/lib]: Could not evaluate: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110
        Could not retrieve file metadata for puppet://devops.XXXXX.com/plugins: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /file_metadata/plugins [find] at :110
        Error: Could not retrieve catalog from remote server: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /catalog/blramisr195602.XXXXX.com [find] at :110
        Using cached catalog
        Error: Could not retrieve catalog; skipping run
        Error: Could not send report: Error 403 on SERVER: Forbidden request: XX.XXX.XX.XX(XX.XXX.XX.XX) access to /report/blramisr195602.XXXXX.com [save] at :110

    and the server logs show:

        XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/certificate_revocation_list/ca? HTTP/1.1" 403 102 "-" "Ruby"
        XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadatas/plugins?links=manage&recurse=true&&ignore=---+%0A++-+%22.svn%22%0A++-+CVS%0A++-+%22.git%22&checksum_type=md5 HTTP/1.1" 403 95 "-" "Ruby"
        XX.XXX.XX.XX - - [10/Dec/2012:14:46:52 +0530] "GET /production/file_metadata/plugins? HTTP/1.1" 403 93 "-" "Ruby"
        XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "POST /production/catalog/blramisr195602.XXXXX.com HTTP/1.1" 403 106 "-" "Ruby"
        XX.XXX.XX.XX - - [10/Dec/2012:14:46:53 +0530] "PUT /production/report/blramisr195602.XXXXX.com HTTP/1.1" 403 105 "-" "Ruby"

    The fileserver.conf file is as follows (and going by what they say on the puppet site, it is better to regulate access in auth.conf for reaching the file server, and then let the file server serve all):

        [files]
        path /apps/puppet/files
        allow *

        [private]
        path /apps/puppet/private/%H
        allow *

        [modules]
        allow *

    I am using server and client version 3. Nginx has been compiled using the following options:

        nginx version: nginx/1.3.9
        built by gcc 4.4.6 20120305 (Red Hat 4.4.6-4) (GCC)
        TLS SNI support enabled
        configure arguments: --prefix=/apps/nginx --conf-path=/apps/nginx/nginx.conf --pid-path=/apps/nginx/run/nginx.pid --error-log-path=/apps/nginx/logs/error.log --http-log-path=/apps/nginx/logs/access.log --with-http_ssl_module --with-http_gzip_static_module --add-module=/usr/lib/ruby/gems/1.8/gems/passenger-3.0.18/ext/nginx --add-module=/apps/Downloads/nginx/nginx-auth-ldap-master/

    and the standard nginx puppet master conf:

        server {
            ssl on;
            listen 8140 ssl;
            server_name _;
            passenger_enabled on;
            passenger_set_cgi_param HTTP_X_CLIENT_DN $ssl_client_s_dn;
            passenger_set_cgi_param HTTP_X_CLIENT_VERIFY $ssl_client_verify;
            passenger_min_instances 5;
            access_log logs/puppet_access.log;
            error_log logs/puppet_error.log;
            root /apps/nginx/html/rack/public;
            ssl_certificate /var/lib/puppet/ssl/certs/bangvmpllda02.XXXXXX.com.pem;
            ssl_certificate_key /var/lib/puppet/ssl/private_keys/bangvmpllda02.XXXXXX.com.pem;
            ssl_crl /var/lib/puppet/ssl/ca/ca_crl.pem;
            ssl_client_certificate /var/lib/puppet/ssl/certs/ca.pem;
            ssl_ciphers SSLv2:-LOW:-EXPORT:RC4+RSA;
            ssl_prefer_server_ciphers on;
            ssl_verify_client optional;
            ssl_verify_depth 1;
            ssl_session_cache shared:SSL:128m;
            ssl_session_timeout 5m;
        }

    Puppet is picking up the correct settings from the files mentioned, because the config print command points to /etc/puppet:

        [amisr1@bangvmpllDA02 puppet]$ sudo puppet config print | grep conf
        async_storeconfigs = false
        authconfig = /etc/puppet/namespaceauth.conf
        autosign = /etc/puppet/autosign.conf
        catalog_cache_terminus = store_configs
        confdir = /etc/puppet
        config = /etc/puppet/puppet.conf
        config_file_name = puppet.conf
        config_version = ""
        configprint = all
        configtimeout = 120
        dblocation = /var/lib/puppet/state/clientconfigs.sqlite3
        deviceconfig = /etc/puppet/device.conf
        fileserverconfig = /etc/puppet/fileserver.conf
        genconfig = false
        hiera_config = /etc/puppet/hiera.yaml
        localconfig = /var/lib/puppet/state/localconfig
        name = config
        rest_authconfig = /etc/puppet/auth.conf
        storeconfigs = true
        storeconfigs_backend = puppetdb
        tagmap = /etc/puppet/tagmail.conf
        thin_storeconfigs = false

    I checked the firewall rules on this VM; 80, 443, 8140 and 3000 are allowed. Do I still have to tweak any specifics in auth.conf to get this to work?

    Update

    I added verbose logging to the puppet master and restarted nginx; here's the additional info I see in the logs:

        Mon Dec 10 18:19:15 +0530 2012 Puppet (err): Could not resolve 10.209.47.31: no name for 10.209.47.31
        Mon Dec 10 18:19:15 +0530 2012 access[/] (info): defaulting to no access for 10.209.47.31
        Mon Dec 10 18:19:15 +0530 2012 Puppet (warning): Denying access: Forbidden request: 10.209.47.31(10.209.47.31) access to /file_metadata/plugins [find] at :111
        Mon Dec 10 18:19:15 +0530 2012 Puppet (err): Forbidden request: 10.209.47.31(10.209.47.31) access to /file_metadata/plugins [find] at :111
        10.209.47.31 - - [10/Dec/2012:18:19:15 +0530] "GET /production/file_metadata/plugins? HTTP/1.1" 403 93 "-" "Ruby"

    On the agent machine, facter fqdn and hostname both return a fully qualified host name:

        [amisr1@blramisr195602 ~]$ sudo facter fqdn
        blramisr195602.XXXXXXX.com

    I then updated the agent configuration to add:

        dns_alt_names = 10.209.47.31

    cleaned all certificates on master and agent, and regenerated and signed the certificates on the master using the option --allow-dns-alt-names:

        [amisr1@bangvmpllDA02 ~]$ sudo puppet cert sign blramisr195602.XXXXXX.com
        Error: CSR 'blramisr195602.XXXXXX.com' contains subject alternative names (DNS:10.209.47.31, DNS:blramisr195602.XXXXXX.com), which are disallowed. Use `puppet cert --allow-dns-alt-names sign blramisr195602.XXXXXX.com` to sign this request.
        [amisr1@bangvmpllDA02 ~]$ sudo puppet cert --allow-dns-alt-names sign blramisr195602.XXXXXX.com
        Signed certificate request for blramisr195602.XXXXXX.com
        Removing file Puppet::SSL::CertificateRequest blramisr195602.XXXXXX.com at '/var/lib/puppet/ssl/ca/requests/blramisr195602.XXXXXX.com.pem'

    However, that doesn't help either; I get the same errors as before. I am not sure why the logs show access rules being compared by IP and not by hostname. Is there any Nginx configuration to change this behavior?
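    Two hedged observations from the verbose log above. First, Puppet is falling back to matching rules by IP because it cannot map 10.209.47.31 to a name, so hostname-based rules like allow $1 never fire; that points either at the SSL client DN not reaching Puppet through Passenger in the form it expects, or at missing reverse DNS for the agent. Second, as a way to confirm the rest of the chain works, Puppet 3's auth.conf accepts allow_ip (agent IP taken from the logs above; restart the master after editing auth.conf):

        path ~ ^/catalog/([^/]+)$
        method find
        allow $1
        allow_ip 10.209.47.31

    If that makes the 403s disappear, the remaining work is in how Nginx passes the client certificate details to Passenger, not in the ACLs themselves.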

    Read the article

  • DNS server not functioning correctly

    - by Shamit Shrestha
    I have set up a DNS server which isn't working properly. My domain is accswift.com, which is glued to two name servers, ns1.accswift.com and ns2.accswift.com, both at the same IP address – 203.78.164.18. On the domain end everything should be fine. Please check http://www.intodns.com/accswift.com. I am sure it's a problem with the Linux server. Can anyone help me find where the problem is? Below are the settings I have on the server.

    ====================== DIG

        [root@accswift ~]# dig accswift.com

        ; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.17.rc1.el6_4.6 <<>> accswift.com
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 11275
        ;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 2

        ;; QUESTION SECTION:
        ;accswift.com.              IN  A

        ;; ANSWER SECTION:
        accswift.com.        38400  IN  A   203.78.164.18

        ;; AUTHORITY SECTION:
        accswift.com.        38400  IN  NS  ns1.accswift.com.
        accswift.com.        38400  IN  NS  ns2.accswift.com.

        ;; ADDITIONAL SECTION:
        ns1.accswift.com.    38400  IN  A   203.78.164.18
        ns2.accswift.com.    38400  IN  A   203.78.164.18

        ;; Query time: 1 msec
        ;; SERVER: 127.0.0.1#53(127.0.0.1)
        ;; WHEN: Wed Nov 6 20:12:16 2013
        ;; MSG SIZE rcvd: 114

    ============== IP Tables settings (vi /etc/sysconfig/iptables)

        *filter
        :FORWARD ACCEPT [0:0]
        :INPUT ACCEPT [0:0]
        :OUTPUT ACCEPT [0:0]
        -A FORWARD -o eth0 -j LOG --log-level 7 --log-prefix BANDWIDTH_OUT:
        -A FORWARD -i eth0 -j LOG --log-level 7 --log-prefix BANDWIDTH_IN:
        -A OUTPUT -o eth0 -j LOG --log-level 7 --log-prefix BANDWIDTH_OUT:
        -A INPUT -i eth0 -j LOG --log-level 7 --log-prefix BANDWIDTH_IN:
        -A INPUT -p udp -m udp --sport 53 -j ACCEPT
        -A OUTPUT -p udp -m udp --dport 53 -j ACCEPT
        COMMIT
        # Completed on Fri Sep 20 04:20:33 2013
        # Generated by webmin
        *mangle
        :FORWARD ACCEPT [0:0]
        :INPUT ACCEPT [0:0]
        :OUTPUT ACCEPT [0:0]
        :PREROUTING ACCEPT [0:0]
        :POSTROUTING ACCEPT [0:0]
        COMMIT
        # Completed
        # Generated by webmin
        *nat
        :OUTPUT ACCEPT [0:0]
        :PREROUTING ACCEPT [0:0]
        :POSTROUTING ACCEPT [0:0]
        COMMIT

    ==== DNS settings (vi /var/named/accswift.com.host)

        $ttl 38400
        @   IN  SOA  ns1.accswift.com. root.ns1.accswift.com. (
                1382936091
                10800
                3600
                604800
                38400 )
        @   IN  NS   ns1.accswift.com.
        @   IN  NS   ns2.accswift.com.
        accswift.com.            IN  A   203.78.164.18
        accswift.com.            IN  NS  ns1.accswift.com.
        www.accswift.com.        IN  A   203.78.164.18
        ftp.accswift.com.        IN  A   203.78.164.18
        m.accswift.com.          IN  A   203.78.164.18
        ns1                      IN  A   203.78.164.18
        ns2                      IN  A   203.78.164.18
        localhost.accswift.com.  IN  A   127.0.0.1
        webmail.accswift.com.    IN  A   203.78.164.18
        admin.accswift.com.      IN  A   203.78.164.18
        mail.accswift.com.       IN  A   203.78.164.18
        accswift.com.            IN  MX  5  mail.accswift.com.

    ==== Named.conf (vi /etc/named.conf)

        options {
            listen-on port 53 { 127.0.0.1; };
            listen-on-v6 port 53 { ::1; };
            directory "/var/named";
            dump-file "/var/named/data/cache_dump.db";
            statistics-file "/var/named/data/named_stats.txt";
            memstatistics-file "/var/named/data/named_mem_stats.txt";
            allow-query { any; };
            recursion yes;
            allow-recursion { localhost; 192.168.2.0/24; };
            dnssec-enable yes;
            dnssec-validation yes;
            dnssec-lookaside auto;
            /* Path to ISC DLV key */
            bindkeys-file "/etc/named.iscdlv.key";
            managed-keys-directory "/var/named/dynamic";
            forward first;
            forwarders { 192.168.1.1; };
        };
        logging {
            channel default_debug {
                file "data/named.run";
                severity dynamic;
            };
        };
        zone "." IN {
            type hint;
            file "named.ca";
        };
        include "/etc/named.rfc1912.zones";
        include "/etc/named.root.key";
        zone "accswift.com" {
            type master;
            file "/var/named/accswift.com.hosts";
            allow-transfer { 127.0.0.1; localnets; 208.73.211.69; };
        };
        zone "ns1.accswift.com" {
            type master;
            file "/var/named/ns1.accswift.com.hosts";
        };

    ====================================

    Can anybody find any flaw in this? I am still unable to reach accswift.com from any other ISP, but it is browsable from within the same network. Thanks in advance.
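    Two likely suspects stand out in the configuration above, offered as a hedged reading rather than a confirmed diagnosis. First, named listens only on loopback (listen-on port 53 { 127.0.0.1; };), so no query from outside the machine ever reaches BIND. Second, the iptables rule accepts inbound UDP by source port, but queries from other resolvers arrive with destination port 53 and an arbitrary source port. A sketch of the corrected pieces:

        options {
            listen-on port 53 { any; };    # or { 127.0.0.1; 203.78.164.18; };
            ...
        };

        -A INPUT -p udp -m udp --dport 53 -j ACCEPT    # inbound queries: destination port 53
        -A INPUT -p tcp -m tcp --dport 53 -j ACCEPT    # TCP 53 for large answers and zone transfers

    After changing named.conf, reload named (e.g. rndc reload or service named restart) and re-test from an outside resolver.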

    Read the article

  • Configuring Multi-Tap on Synaptics Touchpad

    - by nunos
    I am having a hard time configuring my notebook's touchpad. The touchpad already works: it successfully responds to one-finger tap, two-finger tap and two-finger vertical scrolling.

    What I want to accomplish:
    - change the two-finger tap action from right-mouse click to middle-mouse click
    - add three-finger tap functionality to yield a right-mouse click action (I have checked that three-finger tap is supported by my laptop's touchpad, since it works on Windows)

    I read on a forum to use this as a guide. I have successfully accomplished point 1 with synclient TapButton2=2. However, I have to do it every time I log in. I have tried to put that command in /etc/rc.local, but the computer always boots and logs in with the default configuration. Regarding point 2, I have tried synclient TapButton3=3, but it doesn't do anything when I three-finger tap the touchpad. I am running Ubuntu 11.10 on an Asus N82JV.

    /etc/X11/xorg.conf:

        nuno@mozart:~$ cat /etc/X11/xorg.conf
        Section "InputClass"
            Identifier "touchpad catchall"
            Driver "synaptics"
            MatchIsTouchpad "on"
            MatchDevicePath "/dev/input/event*"
            Option "TapButton1" "1"
            Option "TapButton2" "2"
            Option "TapButton3" "3"
        EndSection

    /usr/share/X11/xorg.conf.d/50-synaptics.conf:

        nuno@mozart:~$ cat /usr/share/X11/xorg.conf.d/50-synaptics.conf
        # Example xorg.conf.d snippet that assigns the touchpad driver
        # to all touchpads. See xorg.conf.d(5) for more information on
        # InputClass.
        # DO NOT EDIT THIS FILE, your distribution will likely overwrite
        # it when updating. Copy (and rename) this file into
        # /etc/X11/xorg.conf.d first.
        # Additional options may be added in the form of
        #   Option "OptionName" "value"
        #
        Section "InputClass"
            Identifier "touchpad catchall"
            Driver "synaptics"
            MatchIsTouchpad "on"
            MatchDevicePath "/dev/input/event*"
            Option "TapButton1" "1"
            Option "TapButton2" "2"
            Option "TapButton3" "3"
        EndSection

    xinput list:

        nuno@mozart:~$ xinput list
        ⎡ Virtual core pointer                          id=2   [master pointer  (3)]
        ⎜   ↳ Virtual core XTEST pointer                id=4   [slave  pointer  (2)]
        ⎜   ↳ Microsoft Microsoft® Nano Transceiver v2.0  id=12  [slave  pointer  (2)]
        ⎜   ↳ Microsoft Microsoft® Nano Transceiver v2.0  id=13  [slave  pointer  (2)]
        ⎜   ↳ ETPS/2 Elantech Touchpad                  id=16  [slave  pointer  (2)]
        ⎣ Virtual core keyboard                         id=3   [master keyboard (2)]
            ↳ Virtual core XTEST keyboard               id=5   [slave  keyboard (3)]
            ↳ Power Button                              id=6   [slave  keyboard (3)]
            ↳ Video Bus                                 id=7   [slave  keyboard (3)]
            ↳ Video Bus                                 id=8   [slave  keyboard (3)]
            ↳ Sleep Button                              id=9   [slave  keyboard (3)]
            ↳ USB2.0 2.0M UVC WebCam                    id=10  [slave  keyboard (3)]
            ↳ Microsoft Microsoft® Nano Transceiver v2.0  id=11  [slave  keyboard (3)]
            ↳ Asus Laptop extra buttons                 id=14  [slave  keyboard (3)]
            ↳ AT Translated Set 2 keyboard              id=15  [slave  keyboard (3)]
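    A hedged explanation for why the settings don't stick: on Ubuntu 11.10 (GNOME/Unity session), gnome-settings-daemon applies its own touchpad configuration at login, overriding whatever the InputClass sections set earlier; /etc/rc.local also runs before the X session exists and as root, so synclient there cannot reach the session's X server. A common workaround is to run synclient after the desktop is up, e.g. as a Startup Applications entry (the delay value is arbitrary):

        sh -c "sleep 5; synclient TapButton2=2 TapButton3=3"

    TapButton3 will still only take effect if the driver actually reports three-finger taps; for Elantech touchpads that depends on the kernel driver version, which may explain why it works on Windows but not here.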

    Read the article

  • compiling openss7

    - by deddihp
    Hello, I got an error while compiling openss7. Do you know what happened? Thanks.

        gcc -DHAVE_CONFIG_H -I. -I. -I. -DLFS=1 -imacros ./config.h -imacros ./include/sys/config.h -I. -I./include -I./include -nostdinc -iwithprefix include -DLINUX -D__KERNEL__ -I/usr/src/linux-headers-lbm-2.6.28-11-generic -I/lib/modules/2.6.28-11-generic/build/include -Iinclude2 -I/lib/modules/2.6.28-11-generic/build/include -I/lib/modules/2.6.28-11-generic/build/arch/x86/include -include /lib/modules/2.6.28-11-generic/build/include/linux/autoconf.h -Iubuntu/include -I/lib/modules/2.6.28-11-generic/build/ubuntu/include -I/lib/modules/2.6.28-11-generic/build/arch/x86/include/asm/mach-default '-DKBUILD_STR(s)=#s' '-DKBUILD_BASENAME=KBUILD_STR('`echo libLfS_specfs_a-specfs.o | sed -e 's,lib.*_a-,,;s,\.o,,;s,-,_,g'`')' -DMODULE -D__NO_VERSION__ -DEXPORT_SYMTAB -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs -fno-strict-aliasing -fno-common -Werror-implicit-function-declaration -O2 -m32 -msoft-float -mregparm=3 -freg-struct-return -mpreferred-stack-boundary=2 -march=i586 -mtune=generic -Wa,-mtune=generic32 -pipe -Wno-sign-compare -fno-asynchronous-unwind-tables -mno-sse -mno-mmx -mno-sse2 -mno-3dnow -fno-stack-protector -fno-omit-frame-pointer -fno-optimize-sibling-calls -Wdeclaration-after-statement -Wno-pointer-sign -fwrapv -ffreestanding -c -o libLfS_specfs_a-specfs.o `test -f 'src/kernel/specfs.c' || echo './'`src/kernel/specfs.c
        In file included from src/kernel/specfs.c:123:
        src/kernel/strspecfs.c: In function ‘specfs_init_cache’:
        src/kernel/strspecfs.c:1406: warning: passing argument 5 of ‘kmem_cache_create’ from incompatible pointer type
        src/kernel/strspecfs.c:1406: error: too many arguments to function ‘kmem_cache_create’
        In file included from src/kernel/specfs.c:126:
        src/kernel/strlookup.c: In function ‘cdev_lookup’:
        src/kernel/strlookup.c:508: warning: format not a string literal and no format arguments
        src/kernel/strlookup.c:514: warning: format not a string literal and no format arguments
        src/kernel/strlookup.c:521: warning: format not a string literal and no format arguments
        src/kernel/strlookup.c: In function ‘cdrv_lookup’:
        src/kernel/strlookup.c:562: warning: format not a string literal and no format arguments
        src/kernel/strlookup.c: In function ‘fmod_lookup’:
        src/kernel/strlookup.c:604: warning: format not a string literal and no format arguments
        src/kernel/strlookup.c: In function ‘cdev_search’:
        src/kernel/strlookup.c:709: warning: format not a string literal and no format arguments
        src/kernel/strlookup.c:716: warning: format not a string literal and no format arguments
        src/kernel/strlookup.c: In function ‘fmod_search’:
        src/kernel/strlookup.c:768: warning: format not a string literal and no format arguments
        src/kernel/strlookup.c: In function ‘cmin_search’:
        src/kernel/strlookup.c:823: warning: format not a string literal and no format arguments
        src/kernel/strlookup.c:830: warning: format not a string literal and no format arguments
        src/kernel/strlookup.c:840: warning: format not a string literal and no format arguments
        src/kernel/strlookup.c:848: warning: format not a string literal and no format arguments
        In file included from src/kernel/specfs.c:129:
        src/kernel/strattach.c: In function ‘check_mnt’:
        src/kernel/strattach.c:131: error: ‘struct vfsmount’ has no member named ‘mnt_namespace’
        src/kernel/strattach.c:131: error: ‘struct task_struct’ has no member named ‘namespace’
        src/kernel/strattach.c: In function ‘do_fattach’:
        src/kernel/strattach.c:200: error: ‘struct nameidata’ has no member named ‘dentry’
        src/kernel/strattach.c:200: error: ‘struct nameidata’ has no member named ‘mnt’
        src/kernel/strattach.c:200: error: ‘struct nameidata’ has no member named ‘dentry’
        src/kernel/strattach.c:203: error: ‘struct nameidata’ has no member named ‘mnt’
        src/kernel/strattach.c:208: error: ‘struct nameidata’ has no member named ‘mnt’
        src/kernel/strattach.c:208: error: ‘struct nameidata’ has no member named ‘mnt’
        src/kernel/strattach.c:208: error: ‘struct nameidata’ has no member named ‘dentry’
        src/kernel/strattach.c:226: error: implicit declaration of function ‘path_release’
        src/kernel/strattach.c: In function ‘do_fdetach’:
        src/kernel/strattach.c:253: error: ‘struct nameidata’ has no member named ‘dentry’
        src/kernel/strattach.c:253: error: ‘struct nameidata’ has no member named ‘mnt’
        src/kernel/strattach.c:255: error: ‘struct nameidata’ has no member named ‘mnt’
        src/kernel/strattach.c:257: error: ‘struct nameidata’ has no member named ‘dentry’
        src/kernel/strattach.c:262: error: ‘struct nameidata’ has no member named ‘mnt’
        src/kernel/strattach.c:265: error: ‘struct nameidata’ has no member named ‘mnt’
        In file included from src/kernel/specfs.c:132:
        src/kernel/strpipe.c: In function ‘do_spipe’:
        src/kernel/strpipe.c:372: warning: assignment discards qualifiers from pointer target type
        make[4]: *** [libLfS_specfs_a-specfs.o] Error 1
        make[4]: Leaving directory `/home/deddihp/dev/source/openss7-0.9.2.G/streams-0.9.2.4'
        make[3]: *** [all-recursive] Error 1
        make[3]: Leaving directory `/home/deddihp/dev/source/openss7-0.9.2.G/streams-0.9.2.4'
        make[2]: *** [all] Error 2
        make[2]: Leaving directory `/home/deddihp/dev/source/openss7-0.9.2.G/streams-0.9.2.4'
        make[1]: *** [all-recursive] Error 1
        make[1]: Leaving directory `/home/deddihp/dev/source/openss7-0.9.2.G'
        make: *** [all] Error 2
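    A hedged reading of the first hard error: kmem_cache_create() lost its destructor argument in kernel 2.6.23, and this openss7 release still passes six arguments, so it predates the 2.6.28 headers it is being built against. Roughly:

        /* Prototype the openss7 code expects (kernels before 2.6.23): */
        struct kmem_cache *kmem_cache_create(const char *name, size_t size,
                size_t align, unsigned long flags,
                void (*ctor)(void *, struct kmem_cache *, unsigned long),
                void (*dtor)(void *, struct kmem_cache *, unsigned long));

        /* Prototype in 2.6.28 -- the dtor argument is gone: */
        struct kmem_cache *kmem_cache_create(const char *name, size_t size,
                size_t align, unsigned long flags,
                void (*ctor)(void *));

    The struct nameidata and path_release errors point the same way (nd->dentry became nd->path.dentry around 2.6.25, and path_release() was removed), so the practical fix is a newer openss7 release that supports this kernel, or building against the older kernel series the release was written for.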

    Read the article
