Search Results

Search found 9166 results on 367 pages for 'tweak ui'.


  • How to use Chrome to access Oracle Applications

    - by Pan.Tian
    So far, the Chrome browser is still not certified for Oracle Applications, so you cannot access Oracle Apps via Chrome directly. But you can install a Chrome extension to access Oracle Applications (it works fine for both 11i and R12 instances). Chrome plugin: Oracle EBS R12 Enablement for Chrome. With this Chrome extension you can log in to the Oracle E-Business Suite R12 Forms UI without the FRM-92129 (or FRM-92120) error that complains the file Registry.dat is missing. Plugin author: zorrofox (see: http://www.itpub.net/forum.php?mod=viewthread&tid=1724526&page=1#pid20409128)

    Read the article

  • Internal Mic not working on Dell Adamo 13

    - by AFD
    So I'm using Elementary Luna Beta (based on Ubuntu 12.04 LTS) on a spare HDD after successfully using Jupiter release (based on 10.10) for many years. In Jupiter I had my internal mic work out of the box and with Skype installed from deb it was all setup without having to step foot inside the terminal. In Luna Beta the internal mic is not recognised by the OS and so also not recognised by Skype. I believe the hardware is this (from sudo lshw): *-multimedia description: Audio device product: 82801I (ICH9 Family) HD Audio Controller vendor: Intel Corporation physical id: 1b bus info: pci@0000:00:1b.0 version: 03 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list configuration: driver=HDA Intel latency=0 resources: irq:46 memory:f8600000-f8603fff As well as this I ran cat /proc/asound/card0/codec* | grep Codec which gave me: Codec: IDT 92HD73C1X5 Codec: Intel Cantiga HDMI How do I tweak Luna to get this hardware working properly? I'm able to switch out my Luna HDD with the Jupiter HDD to help troubleshoot what the differences are between the two and why the more recent OS can't find / use the mic correctly. Thanks in advance for any help you can give.
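    A hedged starting point (not from the original post): on a 12.04-based system, most internal-mic failures with this codec come down to muted capture channels or a missing snd-hda-intel "model" quirk. The dell-m6 value below is only an example used for IDT 92HD73xx codecs, not a confirmed fix for the Adamo 13.

```sh
# Hedged sketch: verify capture works at all before touching any config.
arecord -l                                     # capture devices ALSA can see
alsamixer -c 0                                 # F4 = capture view; unmute/raise Capture + Internal Mic
arecord -d 5 -f cd /tmp/mic-test.wav && aplay /tmp/mic-test.wav

# If the mic is still silent, try a codec "model" quirk for snd-hda-intel.
# "dell-m6" is an example value for IDT 92HD73xx codecs, not a verified
# Adamo 13 fix; remove the line again if it makes things worse.
echo "options snd-hda-intel model=dell-m6" | sudo tee -a /etc/modprobe.d/alsa-base.conf
sudo alsa force-reload                         # or reboot, then retest with arecord
```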

    Read the article

  • How do I restore the original color scheme, icons, and theme?

    - by katya sehgal
    I'd like the original colour scheme and icon style of 12.04 back. I somehow lost the Ambiance theme (possibly through an error, or an upgrade error). I re-installed 'light-themes' from the terminal and got the theme itself back, but the top panel that shows the sound, battery and wi-fi indicators has changed, and I cannot get the original look back. In windows, the close and minimize buttons have shifted to the right instead of the original left side. I had installed MyUnity and Ubuntu Tweak but have since removed them. I simply want the original settings back; kindly help me with the commands. I have searched for solutions; there are several and I need to be sure which one to follow, so please bear with me before marking this a duplicate. What I have found so far: the appearance is gray and boxy as outlined here (not sure it is the same problem); there is a similar 'gray and boxy' article here; and a 'desktop forgets theme' report. I have also tried the unity --reset command; it never completes (I gave it 20 minutes).
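    A hedged sketch of the reset commands, assuming the tweak tools changed the standard GNOME/Unity keys on 12.04; the key names below are the stock ones, not taken from the original post.

```sh
# Hedged sketch: put the interface keys back to their 12.04 defaults.
gsettings reset org.gnome.desktop.interface gtk-theme      # Ambiance
gsettings reset org.gnome.desktop.interface icon-theme     # ubuntu-mono-dark
gsettings reset org.gnome.desktop.interface cursor-theme

# Window buttons back on the left, the stock 12.04 layout:
gsettings set org.gnome.desktop.wm.preferences button-layout 'close,minimize,maximize:'

# Log out and back in afterwards so the panel and indicators reload the theme.
```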

    Read the article

  • Skype 2.1beta for Linux and sound quality

    - by vava
    I've been using Skype 2.1 beta for Linux with my Bluetooth headset and the sound quality is just awful. Not always, though: if I call the echo test service the quality is acceptable, but when I call real people there is echo, the sound breaks up, there are pauses, the voice is unrecognizable; all sorts of quality problems in one call. If I use the newest Skype under Windows with the same headset to call the same people, the quality is perfectly normal. So, are there some settings I can tweak, like telling Skype which codec to use? Or maybe there's a noise-cancellation plugin for PulseAudio I can use, or some other system setting I can try to play with?
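    One thing worth trying, offered as an assumption rather than a known fix: Skype itself does not expose codec settings, but PulseAudio ships an echo-cancellation module that can sit between the headset and Skype. Availability depends on the PulseAudio version, and Bluetooth headsets drop to the low-quality HSP/HFP profile whenever the mic is in use, which limits what any tweak can achieve.

```sh
# Hedged sketch: route audio through PulseAudio's echo-cancel module
# (needs a PulseAudio version that ships module-echo-cancel).
pactl load-module module-echo-cancel source_name=ec_mic sink_name=ec_out
pactl set-default-source ec_mic
pactl set-default-sink ec_out

# To keep it across restarts, the same three lines (without "pactl") can go
# into /etc/pulse/default.pa:
#   load-module module-echo-cancel source_name=ec_mic sink_name=ec_out
#   set-default-source ec_mic
#   set-default-sink ec_out

# pavucontrol shows whether the headset has fallen back to the HSP/HFP
# (telephony) profile, which is what usually makes call audio sound rough.
```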

    Read the article

  • OTN Article: The Enterprise Side of JavaFX (part 1 of 2)

    - by terrencebarr
    OTN just published part 1 of a series by Adam Bien on “The Enterprise Side of JavaFX”. In this article, learn how to use LightView to convert REST services into a bindable set of properties, using JavaFX, Glassfish, LightFish, and Maven. Sample code included. Part 2 will discuss the integration of a JavaServer Faces 2 UI with WebView. Cheers, – Terrence Filed under: Mobile & Embedded Tagged: glassfish, JavaFX

    Read the article

  • other computer in the network cannot connect to mysql database

    - by user28233
    I have a VB.NET program that uses MySQL as its database. It works when the computer has WampServer installed, but the program throws an unhandled exception on a computer where WampServer is not installed; the only thing installed there is MySQL Connector/NET. How do I make it work? I just want the two programs to access the same MySQL database. I have already opened port 20 in the firewall, both TCP and UDP. What do I do? Do I have to tweak the code? Has anyone here tried this before?
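    A hedged note and sketch (the database, user, password and IP range are placeholders, not from the original question): MySQL itself listens on TCP 3306 by default, so that is the port the firewall needs to allow, and the MySQL user must be permitted to connect from a remote host. Something along these lines on the WampServer machine:

```sql
-- Hedged sketch (database, user, password and IP range are placeholders).
-- Run on the MySQL server that WampServer installed:
GRANT ALL PRIVILEGES ON appdb.* TO 'appuser'@'192.168.1.%' IDENTIFIED BY 'secret';
FLUSH PRIVILEGES;

-- Also make sure the server is reachable:
--   * open TCP 3306 (MySQL's default port) in the Windows firewall, not port 20;
--   * check my.ini does not restrict MySQL to local use (bind-address / skip-networking).
-- The Connector/NET client then uses a connection string along the lines of:
--   Server=192.168.1.10;Port=3306;Database=appdb;Uid=appuser;Pwd=secret;
```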

    Read the article

  • Webcast Q&A: Qualcomm Provides a Seamless Experience for Customers with Oracle WebCenter

    - by kellsey.ruppel
    Last Thursday we had the second webcast in our WebCenter in Action webcast series, "Qualcomm Provides a Seamless Experience for Customers with Oracle WebCenter," where customer Michael Chander from Qualcomm and Vince Casarez & Gourav Goyal from Oracle Partner Keste shared how Oracle WebCenter is powering Qualcomm’s externally facing website and providing a seamless experience for their customers. In case you missed it, here's a recap of the Q&A.   Mike Chandler, Qualcomm Q: Did you run into any issues when integrating all of the different applications together?A: Definitely. Our main challenges were in the area of user provisioning and security propagation, all the standard stuff you might expect when hooking up SSO for authentication and authorization. In addition, we spent several iterations getting the UIs in sync. While everyone was given the same digital material to build to, each team interpreted and implemented it in their own way. Initially, as a user navigated, if you were looking for it, you could see slight variations in color or font or width, stuff like that. So we had to pull all the developers responsible for the UI together and get pixel-level agreement on a lot of things so we could ensure seamless transitions across applications. Q: What has been the biggest benefit your end users have seen?A: Wow, there have been several. An SSO-enabled environment was a huge win for our users. The portal application that this replaced had not really been invested in by the business. With this project, we had full business participation and backing, and it really showed in some key areas like the shopping experience. For example, while ordering in the previous site, the items did not have any pictures or really usable descriptions. A tremendous amount of work was done to try and make the site more intuitive and user friendly. Site performance has also drastically improved thanks to new hardware, improved database design, and of course the fact that ADF has made great strides in runtime performance. Q: Was there any resistance internally when implementing the solution? If so, how did you overcome that?A: Within a large company, I’m sure there is always going to be competition for large projects, as there was here. Once we got through the technical analysis and settled on the technology choices, there was actually no resistance to implementing the solution. This project was fully driven by the business with the aim of long-term growth. I can confidently say that the fact that this project was given the utmost importance by both the business and IT really helped put down any resistance that you would typically see while implementing a new solution. Q: Given the performance, what do you estimate to be the top end capacity of the system? A: I think our top end capacity is really only limited by our hardware. I’m comfortable saying we could grow 10x on our current hardware, both in terms of transactions and users. We can easily spin up new JVM instances if needed; we already use fewer JVMs than we had planned. In addition, ADF is doing a very good job with its connection pooling and application module pooling, so we see a very good ratio of users connected to the systems vs. db connections, without impacting performance. Q: What's the overview or summary of feedback from the users interacting with the site?A: Feedback has been overwhelmingly positive from both the business and our customers. They’re very happy with the new SSO environment, the new LAF, and the performance of the site.
    Of course, it’s not all roses. No matter what, there are always going to be people that don’t like the layout or the color scheme, etc. By and large though, customers are happy and the business is happy. Q: Can you describe the impressions about the site before and after the project within Qualcomm?A: Before the project, the site worked and people were using it, but most people were not happy with it. It was slow and tended to be a bit temperamental; for example, a user would perform a transaction and the system would throw an unexpected error. The user could back up and retry the steps and things would work fine, so why didn’t it work the first time? From a UI perspective, we’d hear comments like it looked like it was built by a high school student.  Vince Casarez & Gourav Goyal, Keste Q: Did you run into any obstacles when implementing the solution?A: It's interesting; some people call them "obstacles", but on this project we just called them "dependencies".  There were both technical and business-related dependencies that we had to work out. Mike points out the SSO dependencies and the coordination and synchronization between the teams to have a seamless login experience and a seamless end user experience.  There was also a set of dependencies on the User Acceptance testing to make sure that everyone understood the use cases for how the system would be used.  Branching into a new market and trying to match the simple user experience many consumer sites have today, there was always a tendency for the team members to provide their suggestions on how things could be simpler.  But with all the work up front on the user design and getting the business driving this set of experiences, this minimized the downstream suggestions that tend to distract a team.  In this case, all the work up front allowed us to enumerate the "dependencies" and keep the distractions to a minimum. Q: Was there a lot of custom work that needed to be done for this particular solution?A: The focus for this particular solution was really on the custom processes. The interesting thing is that with the data flows and the integration with applications, there are some pre-built integrations, but realistically, for the process flow, we had to build those. The framework and tooling we used made things easier so we didn’t have to implement core functionality, like transitioning from screen to screen or from flow to flow. The design feature of Task Flows really helped speed the development and keep the component infrastructure in line with the dynamic processes.  Task Flows and other elements like Skins are core to the infrastructure or technology stack of Oracle. This then allowed the team to center the project focus around the business flows and use cases to meet the core requirements and keep the project on time. Q: What do you think were the keys to success for rolling out WebCenter?A: The 5 main keys to success were: 1) sponsorship from the whole organization around this project, from senior executive agreement, business owners driving functionality, and IT development alignment; 2) upfront design planning and use case definition to clearly define the project scope and requirements; 3) focused development and project management aligned with the top-level goals and drivers; 4) user acceptance and usability testing along the way to identify potential issues and drive direct resolution of the issues; and 5) constant prioritization of the issues for development to fix by the business.
It also helps to have great team chemistry and really smart people working on the project. If you missed the webcast, be sure to catch the replay to see a live demonstration of WebCenter in action!  Qualcomm Provides a Seamless Experience for Customers with Oracle WebCenter from Oracle WebCenter

    Read the article

  • Implementing drag & drop in APEX applications

    - by Carsten Czarski
    In this tip we show you how to implement drag & drop in your APEX application. It is not as much work as you might think, because the jQuery UI library bundled with APEX does the lion's share of the work, so only a few JavaScript calls need to be made. Read how, in your own APEX application, you can assign rows from the EMP table to rows of the DEPT table via drag & drop.

    Read the article

  • Configuring the iPlanet as web tier for Oracle WebCenter Content (UCM)

    - by Adao Junior
    If you are looking to configure iPlanet as the web server/proxy for Oracle WebCenter Content, you probably won’t find specific documentation for it, or will only find some old, complex notes related to the old 10gR3. This post will help you out with a few simple steps. That’s the diagram of the test scenario, assuming that you will deploy to production in a cluster environment. First you need the software; for our scenario you will need: - Oracle iPlanet Web Server 7.0.15+ (Installed) - Oracle WebCenter Content 11gR1 PS5 (Installed) - Oracle WebLogic Web Server Plugins 11g (1.1) - Supported JDK (Using Oracle Java JDK 7u4 for the test) - Certified Client OS - Certified Server OS (Using Oracle Solaris 11 for the test) - Certified Database (Using Oracle Database 11.2.0.3 for the test) Then the configuration: - Download the latest plugin: http://www.oracle.com/technetwork/middleware/ias/downloads/wls-plugins-096117.html - Extract the WLSPlugin11g-iPlanet7.0 into some folder, like <iPlanet_Home>/plugins/wls11 - Include the plugin reference in magnus.conf: If Unix (Solaris or Linux), include the line: Init fn="load-modules" shlib="/apps/oracle/WebServer7/plugins/wls11/lib/mod_wl.so" If Windows, include the line: Init fn="load-modules" shlib="D:\\oracle\\WebServer7\\plugins\\wls11\\lib\\mod_wl.dll" - Include the proxy reference in the obj.conf of each instance: <Object name="weblogic" ppath="*/cs/*"> Service fn="wl-proxy" WebLogicCluster="wcc-node1:16201,wcc-node2:16202, wcc-node3:16203" </Object>   <Object name="weblogic" ppath="*/_dav/*"> Service fn="wl-proxy" WebLogicCluster="wcc-node1:16201,wcc-node2:16202, wcc-node3:16203" </Object>   <Object name="weblogic" ppath="*/_ocsh/*"> Service fn="wl-proxy" WebLogicCluster="wcc-node1:16201,wcc-node2:16202, wcc-node3:16203" </Object>   <Object name="weblogic" ppath="*/adfAuthentication/*"> Service fn="wl-proxy" WebLogicCluster="wcc-node1:16201,wcc-node2:16202, wcc-node3:16203" </Object> If you are using a single-node setup, change the Service fn=…. line to something like: Service fn="wl-proxy" WebLogicHost=<wcc-server> WebLogicPort=16200 With these configurations, you should have the WebCenter Content UI working through iPlanet; test it at http://<web-server>/cs/. With the UI working, the last step is to configure WebDAV: - Go to the iPlanet Admin Console (usually https://<web-server>:8989) - Go to Configurations >> [instance] >> Virtual Servers >> [Virtual Server] >> WebDAV - Click New - Populate the URI with /cs/idcplg/webdav - Select “Anyone (No Authentication)”; WebCenter Content will take care of the security. This will allow you to use the WebDAV feature and the Desktop Integration Suite, including double-byte characters. Other iPlanet tuning could be done, which I can cover in a future post about iPlanet. Cross-posted on the ContentrA.com Blog Related posts: - Using a Web Proxy Server with WebCenter Family

    Read the article

  • Dynamically setting a value in XAML page:

    - by kaleidoscope
    This is a find that I came across while developing the Silverlight screen for the MSFT BPO Invoice project. Consider an instance wherein I am calling a XAML page as a popup from my parent XAML, and suppose we want to dynamically set a textbox field in the parent page with the value selected in the popup XAML. I tried the following approaches to achieve this: 1. Creating an object of the parent page within the popup XAML and initializing its textbox field: ParentPage p = new ParentPage(); ParentPage.txtCompCode.Text = selectedValue; 2. Using App app = (App)Application.Current and storing the selected value in app. 3. Using IsolatedStorage. All of these approaches failed to produce the desired effect, since in the first case I did not want the parent page to get initialized over and over again, and in all of the approaches the value was not spontaneously rendered on the parent page. After some trial and error I decided to tweak the g.cs file of the parent XAML. *.g.cs files are auto-generated and their purpose is to wire XAML elements to the code-behind file; this file holds the references to XAML elements that carry the x:Name property. So I changed the access modifier of the relevant textbox fields to 'static' and then set the value directly from the popup XAML page like so: ParentPage.txtCompCode.Text = selectedValue; This seemed to work perfectly. You can open any XAML page's g.cs file by going to the definition of InitializeComponent() in the page's constructor. PS: I may have failed to explore other, more efficient ways of getting this done, so if anybody finds a better alternative please feel free to get back to me. Tinu

    Read the article

  • How do I re-enable the backlight?

    - by Scott Severance
    Since Oneiric, if I leave my machine (HP Mini 110 netbook) unattended and it goes into power-save mode, the backlight gets disabled. How can I turn it back on? Note that the keyboard backlight controls (Fn+F4 and Fn+F3) don't have any effect in this situation. I've already filed a bug, but filing a bug doesn't fix my problem. I tried this workaround posted in this bug report dealing with Acer laptops: sudo setpci -s 00:02.0 F4.B=0 However, if anything, that command makes things worse. In the general case, I can see a little bit if I'm in a dark room with a flashlight aimed just so. But after running setpci I can't see anything. And I find the setpci documentation to be utterly incomprehensible, so I don't know whether I need to tweak my command somehow or whether I'm completely barking up the wrong tree. Update: I've found a workaround: I'm now booting with the kernel parameter acpi=off. This disables power management, which prevents the machine from going into power saving mode and thus failing to come back up correctly. Of course, not having power management means that I can't use suspend or do anything to manage power other than powering it off (even then, I have to manually use the power switch). Also, it prevents me from using Unity 3D or Gnome Shell, forcing me into Unity 2C or Gnome Classic. So, I'd really like to be able to stop using this hack.
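    A few hedged things to try from a text console (Ctrl+Alt+F1) or over SSH when the panel stays dark; the backlight interface name and brightness value below are examples and vary by machine.

```sh
# Hedged sketch: poke the kernel backlight interface directly.
ls /sys/class/backlight/                          # e.g. acpi_video0 or intel_backlight
cat /sys/class/backlight/*/max_brightness
echo 5 | sudo tee /sys/class/backlight/acpi_video0/brightness   # name and value are examples

# Or force the panel back on via DPMS (vbetool is in the Ubuntu archive):
sudo apt-get install vbetool
sudo vbetool dpms on
```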

    Read the article

  • Interconnect nodes in a Java distributed infrastructure for tweet processing

    - by David Moreno García
    I'm working in a new version of an old project that I used to download and process user statuses from Twitter. The main problem of that project was its infrastructure. I used multiple instances of a java application (trackers) to download from Twitter given an specific task (basically terms to search for), connected with a central node (a web application) that had to process all tweets once per day and generate a new task for each trackers once each 15 minutes. The central node also had to monitor all trackers and enable/disable them under user petition. This, as I said, was too slow because I had multiple bottlenecks, so in this new version I want to improve the infrastructure and isolate all functionalities in specific nodes. I also need a good notification system to receive notifications for any node. So, in the next diagram I show the components that I'll need in this new version: As you can see, there are more nodes. Here are some notes about them: Dashboard: Controls trackers statuses and send a single task to each of them (under user request). The trackers will use this task until replaced with a new one (if done, not each 15 minutes like before). Search engine: I need to store all the tweets. They are firstly stored in a local database for each tracker but after that I'm thinking on using something like Elasticsearch to be able to do fast searches. Tweet processor: Just and isolated component with its own database (maybe something like the search engine to have fast access to info generated by the module). In the future more could be added. Application UI: A web application with a shared database with the Dashboard (mainly to store users information and preferences). Indeed, both could be merged into a single web. The main difference with the previous version of the project is that now they will be isolated and they will only show information and send requests. I will not do any heavy task in them (like process tweets as I did before). So, having this components, my main headache is how to structure all to not have to rewrite a lot of code every time I need to access any new data. Another headache is how can I interconnect nodes. I could use sockets but that is a pain in the ass. Maybe a REST layer? And finally, if all the nodes are isolated, how could I generate notifications for each user which info is only in the database used by the Application UI? I'm programming this using Java and Spring (at least I used them in the last version) but I have no problems with changing the language if I can take advantage of a tool/library/engine to make my life easier and have a better platform. Any comment will be appreciated.

    Read the article

  • Nginx Subdomain Problem

    - by user292299
    i can't access my subdomain on localhost. my localdomain is localhost.dev and it's work.but i want to auto subdomain for php script (username.localhost.dev) i try this server { listen 80 default_server; listen [::]:80 default_server ipv6only=on; access_log /var/www/access.log; error_log /var/www/error.log; root /var/www; index index.php index.html index.htm; # Make site accessible from http://localhost/ server_name localhost.dev ***.localhost.dev**; location / { # First attempt to serve request as file, then # as directory, then fall back to displaying a 404. try_files $uri $uri/ /index.html; # Uncomment to enable naxsi on this location # include /etc/nginx/naxsi.rules } location /f2/public/ { try_files $uri $uri/ /f2/public/index.php?$args; } location /doc/ { alias /usr/share/doc/; autoindex on; allow 127.0.0.1; allow ::1; deny all; } # Only for nginx-naxsi used with nginx-naxsi-ui : process denied requests #location /RequestDenied { # proxy_pass http://127.0.0.1:8080; #} #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # #error_page 500 502 503 504 /50x.html; #location = /50x.html { # root /usr/share/nginx/html; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # location ~ \.php$ { # fastcgi_split_path_info ^(.+\.php)(/.+)$; # # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini # # # With php5-cgi alone: # fastcgi_pass 127.0.0.1:9000; # # With php5-fpm: # fastcgi_pass unix:/var/run/php5-fpm.sock; # fastcgi_index index.php; # include fastcgi_params; include /etc/nginx/fastcgi_params; try_files $uri =404; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; } # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} } it's not working.i change server_name for testing server_name localhost.dev asd.localhost.dev; i can't access asd.localhost.dev and i try this double server{} section # You may add here your # server { # ... # } # statements for each of your virtual hosts to this file ## # You should look at the following URL's in order to grasp a solid understanding # of Nginx configuration files in order to fully unleash the power of Nginx. # http://wiki.nginx.org/Pitfalls # http://wiki.nginx.org/QuickStart # http://wiki.nginx.org/Configuration # # Generally, you will want to move this file somewhere, and start with a clean # file but keep this around for reference. Or just disable in sites-enabled. # # Please see /usr/share/doc/nginx-doc/examples/ for more detailed examples. ## server { listen 80 default_server; listen [::]:80 default_server ipv6only=on; access_log /var/www/access.log; error_log /var/www/error.log; root /var/www; index index.php index.html index.htm; # Make site accessible from http://localhost/ server_name localhost.dev; location / { # First attempt to serve request as file, then # as directory, then fall back to displaying a 404. 
try_files $uri $uri/ /index.html; # Uncomment to enable naxsi on this location # include /etc/nginx/naxsi.rules } location /f2/public/ { try_files $uri $uri/ /f2/public/index.php?$args; } location /doc/ { alias /usr/share/doc/; autoindex on; allow 127.0.0.1; allow ::1; deny all; } # Only for nginx-naxsi used with nginx-naxsi-ui : process denied requests #location /RequestDenied { # proxy_pass http://127.0.0.1:8080; #} #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # #error_page 500 502 503 504 /50x.html; #location = /50x.html { # root /usr/share/nginx/html; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # location ~ \.php$ { # fastcgi_split_path_info ^(.+\.php)(/.+)$; # # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini # # # With php5-cgi alone: # fastcgi_pass 127.0.0.1:9000; # # With php5-fpm: # fastcgi_pass unix:/var/run/php5-fpm.sock; # fastcgi_index index.php; # include fastcgi_params; include /etc/nginx/fastcgi_params; try_files $uri =404; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; } # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} } ############################### server { access_log /var/www/access.log; error_log /var/www/error.log; root /var/www; index index.php index.html index.htm; # Make site accessible from http://localhost/ server_name asd.localhost.dev; location / { # First attempt to serve request as file, then # as directory, then fall back to displaying a 404. try_files $uri $uri/ /index.html; # Uncomment to enable naxsi on this location # include /etc/nginx/naxsi.rules } location /f2/public/ { try_files $uri $uri/ /f2/public/index.php?$args; } location /doc/ { alias /usr/share/doc/; autoindex on; allow 127.0.0.1; allow ::1; deny all; } # Only for nginx-naxsi used with nginx-naxsi-ui : process denied requests #location /RequestDenied { # proxy_pass http://127.0.0.1:8080; #} #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # #error_page 500 502 503 504 /50x.html; #location = /50x.html { # root /usr/share/nginx/html; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # location ~ \.php$ { # fastcgi_split_path_info ^(.+\.php)(/.+)$; # # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini # # # With php5-cgi alone: # fastcgi_pass 127.0.0.1:9000; # # With php5-fpm: # fastcgi_pass unix:/var/run/php5-fpm.sock; # fastcgi_index index.php; # include fastcgi_params; include /etc/nginx/fastcgi_params; try_files $uri =404; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; } # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} } # another virtual host using mix of IP-, name-, and port-based configuration # #server { # listen 8000; # listen somename:8080; # server_name somename alias another.alias; # root html; # index index.html index.htm; # # location / { # try_files $uri $uri/ =404; # } #} # HTTPS server # #server { # listen 443; # server_name localhost; # # root html; # index index.html index.htm; # # ssl on; # ssl_certificate cert.pem; # ssl_certificate_key cert.key; # # ssl_session_timeout 5m; # # ssl_protocols SSLv3 TLSv1; # ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv3:+EXP; # ssl_prefer_server_ciphers on; # # location / { # try_files $uri 
$uri/ =404; # } #} This does not work either; I still can't get the subdomain to resolve.
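    A hedged sketch of an alternative layout, not a confirmed fix: instead of duplicating the whole server block per subdomain, a single block can match the bare domain plus every username subdomain with a regex server_name. Each test name still has to resolve to 127.0.0.1, for example via explicit /etc/hosts entries or a local dnsmasq rule such as address=/localhost.dev/127.0.0.1, since /etc/hosts itself cannot do wildcards. The directives below are trimmed to the essentials:

```nginx
# Hedged sketch: one server block for localhost.dev and every *.localhost.dev.
server {
    listen 80 default_server;

    # The regex captures the username part into $sub (unset for the bare domain).
    server_name localhost.dev ~^(?<sub>.+)\.localhost\.dev$;

    root /var/www;
    index index.php index.html;

    location / {
        try_files $uri $uri/ /index.html;
    }

    location ~ \.php$ {
        include /etc/nginx/fastcgi_params;
        try_files $uri =404;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param SUBDOMAIN $sub;   # lets the PHP script see which username was requested
    }
}
```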

    Read the article

  • How to wrap console utils in webserver

    - by Alex Brown
    I have a big dataset (100Mbs/day) and a bunch of console TCL/TK tools to view it. I want to turn it into a web app that I can build and others can maintain. In long: my group runs simulations yielding 100s of Mbs of data daily, in multiple (mostly but not only) text forms. We have a bunch of scripts and tools, mostly old-school 1990s-style stuff requiring a 5-button mouse, as well as lots of ad-hoc scripts that engineers build out of frustration every month or so. These produce UIs, graphs, spreadsheets (various sizes), logs, event histories, etc. I want to replace (or at least supplement) the X Windows / console style UI with a web-based one, so I need the following properties: pleasant to program; can wrap existing command-line tools in separate views (I don't need to scrape GUIs or anything); as I port logic from the existing scripts I can create a modularised and pleasant codebase to replace it; I can attach a web UI to navigate between views, since each view is likely to contain keys which might make sense to view in another. I am new to building systems that have logic on both the back end and front end of a web server. From that point of view, they do this: the backend wraps old-school executables, constructs calls into them, then takes the output, wraps it up, niceifies it and delivers it to the web client. For instance, a tool might generate a number of indexed images (per invocation) which I might deliver all at once or on demand. It may (probably will) need to do heavy stats on some sources. The frontend provides navigation connecting multiple views, performs requests from one view for data from another (or self to self), etc. It will probably have some views with a lot of interactivity. Can people please point me towards viable solutions for this? I know it's a bit of an open question, so as answers come in I hope to refine the spec until we have a good match. I guess I expect to see answers like "RoR!" "beans!" "Scala!" but please give an indication of why those are a good fit; I know nothing! I got bumped off SO for asking an open-ended question, so sorry if it's OT here too (let me know). My policy is to use the best/closest-matched language for a project, but most of my team are extremely low level (i.e. pipeline stages and CDyn) so I don't have the peer group to know where to start.
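    To make the "backend wraps old-school executables" idea concrete, here is a minimal, framework-agnostic sketch in Python/Flask; the tool name "simview", its flags, and the URL layout are invented placeholders, and any of the frameworks people might suggest would express the same shape.

```python
# Hedged sketch: expose one legacy command-line tool as an HTTP endpoint.
# "simview" and its flags are placeholders for whatever tool already exists;
# Flask is just one convenient micro-framework for the example.
import subprocess
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/runs/<run_id>/summary")
def run_summary(run_id):
    # Build the command line exactly as the old wrapper scripts would have.
    cmd = ["simview", "--summary", "--run", run_id]
    if request.args.get("verbose") == "1":
        cmd.append("--verbose")
    proc = subprocess.run(cmd, capture_output=True, text=True, timeout=300)
    if proc.returncode != 0:
        return jsonify(error=proc.stderr), 502
    # Parse/"niceify" the output here instead of returning raw text once the
    # views stabilise; for now the raw stdout is enough to prove the plumbing.
    return jsonify(run=run_id, output=proc.stdout)

if __name__ == "__main__":
    app.run(port=8080)
```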

    Read the article

  • Only 192.168.0.3 can request most files, but anyone can request /public/file.html

    - by mattalexx
    I have the following virtual host on my development server: <VirtualHost *:80> ServerName example.com DocumentRoot /srv/web/example.com/pub <Directory /srv/web/example.com/pub> Order Deny,Allow Deny from all Allow from 192.168.0.3 </Directory> </VirtualHost> The Allow from 192.168.0.3 part is to only allow requests from my workstation machine. I want to tweak this to allow anyone to request a certain URL: http://example.com/public/file.html How do I change this to allow /public/file.html requests to get through from anyone? Note: /public/file.html doesn't actually exist as a file on the server. I redirect all incoming requests through a single index file using mod_rewrite.
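    A hedged sketch in the same Apache 2.2 Order/Allow syntax: because /public/file.html is a URL handled by the mod_rewrite front controller rather than a real file, matching the request URI with a Location block (which ignores whether the file exists on disk) is one way to open just that path.

```apache
<VirtualHost *:80>
    ServerName example.com
    DocumentRoot /srv/web/example.com/pub

    <Directory /srv/web/example.com/pub>
        Order Deny,Allow
        Deny from all
        Allow from 192.168.0.3
    </Directory>

    # Hedged addition: <Location> matches the request URI, so it applies even
    # though /public/file.html never exists on disk and is served via mod_rewrite.
    <Location /public/file.html>
        Order Deny,Allow
        Allow from all
    </Location>
</VirtualHost>
```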

    Read the article

  • Where is the network connection enabled/disabled setting stored?

    - by minerj
    I have an Amazon EC2 instance of Windows Server 2008 where some genius managed to disable the network connection so that the instance is now isolated in its own little universe. I can shut down the instance and edit the "C:\" drive volume by attaching it to another running instance. This is equivalent to removing the system drive from a dead machine and attaching it to another computer to edit the files. Question: Where is the network connection enabled / disabled setting stored? If I can tweak this setting by editing the registry or a file to re-enable the network connection, I can then resurrect my Amazon server.
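    One place worth checking, offered as an assumption rather than a confirmed answer: a connection disabled in Network Connections normally shows up as a disabled device, and the disabled flag lives in the ConfigFlags value (bit 0x1) under the Enum branch of the offline SYSTEM hive. The key paths and device IDs below are examples; the Select key in the hive says which ControlSetNNN was current, and editing Enum keys may additionally require taking ownership of the key.

```bat
:: Hedged sketch, run on the helper instance with the broken system volume as D:.
:: Load the offline SYSTEM hive under a temporary key:
reg load HKLM\OfflineSystem D:\Windows\System32\config\SYSTEM

:: Find which control set was current (Select\Current), then hunt for the NIC:
reg query HKLM\OfflineSystem\Select /v Current
reg query HKLM\OfflineSystem\ControlSet001\Enum /s /f ConfigFlags /v

:: ConfigFlags with bit 0x1 set means the device was disabled; clearing it back
:: to 0 should re-enable it on the next boot (assumption - verify the key really
:: belongs to the network adapter before changing it):
reg add "HKLM\OfflineSystem\ControlSet001\Enum\PCI\VEN_xxxx&DEV_xxxx&SUBSYS_xxxx\instance-id" /v ConfigFlags /t REG_DWORD /d 0 /f

:: Unload the hive so the change is flushed to disk, then reattach the volume:
reg unload HKLM\OfflineSystem
```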

    Read the article

  • Representing complex object dependencies

    - by max
    I have several classes with a reasonably complex (but acyclic) dependency graph. All the dependencies are of the form: a class X instance contains an attribute of class Y. All such attributes are set during initialization and never changed again. Each class' constructor has just a couple of parameters, and each object knows the proper parameters to pass to the constructors of the objects it contains. class Outer is at the top of the dependency hierarchy, i.e., no class depends on it. Currently, the UI layer only creates an Outer instance; the parameters for the Outer constructor are derived from the user input. Of course, Outer, in the process of initialization, creates the objects it needs, which in turn create the objects they need, and so on. The new development is that a user who knows the dependency graph may want to reach deep into it, and set the values of some of the arguments passed to constructors of the inner classes (essentially overriding the values used currently). How should I change the design to support this? I could keep the current approach where all the inner classes are created by the classes that need them. In this case, the information about "user overrides" would need to be passed to the Outer class' constructor in some complex user_overrides structure. Perhaps user_overrides could be the full logical representation of the dependency graph, with the overrides attached to the appropriate edges. The Outer class would pass user_overrides to every object it creates, and they would do the same. Each object, before initializing lower-level objects, would find its location in that graph and check whether the user requested an override to any of the constructor arguments. Alternatively, I could rewrite all the objects' constructors to take as parameters the full objects they require. Thus, the creation of all the inner objects would be moved outside the whole hierarchy, into a new controller layer that lies between Outer and the UI layer. The controller layer would essentially traverse the dependency graph from the bottom, creating all the objects as it goes. The controller layer would have to ask the higher-level objects for parameter values for the lower-level objects whenever the relevant parameter isn't provided by the user. Neither approach looks terribly simple. Is there any other approach? Has this problem come up enough in the past to have a pattern that I can read about? I'm using Python, but I don't think it matters much at the design level.
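    A minimal sketch of the first option, with the overrides keyed by a dotted path into the object graph; Cache/Engine/Outer and their parameters are invented stand-ins for the real classes.

```python
# Hedged sketch of the user_overrides option; Cache/Engine/Outer and their
# parameters are invented stand-ins for the real classes. Overrides are keyed
# by a dotted path into the object graph.

def build(cls, path, overrides, **defaults):
    """Construct cls for node `path`, letting user overrides win over defaults."""
    kwargs = {**defaults, **(overrides or {}).get(path, {})}
    return cls(**kwargs)

class Cache:                 # a leaf class with no dependencies of its own
    def __init__(self, size_mb=64):
        self.size_mb = size_mb

class Engine:                # mid-level: contains a Cache
    def __init__(self, mode="fast", overrides=None):
        self.mode = mode
        self.cache = build(Cache, "engine.cache", overrides, size_mb=128)

class Outer:                 # top of the hierarchy: the UI layer only builds this
    def __init__(self, user_input, overrides=None):
        self.engine = build(Engine, "engine", overrides,
                            mode=user_input, overrides=overrides)

# The UI still only creates Outer, but a knowledgeable user can reach deep in:
outer = Outer("fast", overrides={"engine.cache": {"size_mb": 512}})
assert outer.engine.cache.size_mb == 512
assert outer.engine.mode == "fast"
```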

    Read the article

  • What can I do to enhance MacBook Pro internal mic sound quality in iMovie?

    - by gaearon
    After using a MacBook for two years, I bought a 17'' MacBook Pro. I'm pretty happy with it, performance and all, but I was also going to record some music videos for YouTube. I play guitar and sing. However, I was extremely disappointed with the sound quality that comes by default. I'm 100% sure my 13'' MacBook mic was much better at recording music and singing. Currently the mic can't even handle acoustic guitar, outputting sound you'd think was recorded 5 years ago in AMR format on a Nokia phone at a loud concert. It totally feels like some lame filter is cutting the low and high frequencies. I want to know what settings (visible or hidden) in iMovie or Mac OS itself I might want to tweak in order to get my MBP mic to record clean sound.

    Read the article

  • Windows 7 - problems launching default application

    - by Chris W
    Just built up a new W7 PC. I've noticed some strange issues with launching default applications. I've got Visual Studio & SQL Server Management Studio set to run as administrator when launched. If I double-click a .sql file, SSMS opens OK but the file itself does not get loaded. If I do the same with a .sln, then I get nothing at all from Visual Studio. For the latter I presume the UAC prompt is hidden somewhere waiting for me to say it's OK to launch the app, but I've no idea what's happening with SSMS. Is this a W7 bug, or are there some settings somewhere that I can tweak to improve this behaviour?

    Read the article

  • How to set up memcached to use unix socket?

    - by alfish
    While I could run memcached on Debian on the default port 11211, I've had great difficulty setting up a unix socket. From what I've read, I know that I need to create a memcache.socket and add -s /path/to/memcache.socket -a 0766 to /etc/memcached.conf and comment out the default connection port and IP, i.e. -p 11211 -l 127.0.0.1. However, when I restart memcached I get internal server errors on the Drupal site. I'm trying to use unix sockets to avoid TCP/IP overhead and boost overall memcached performance, though I'm not sure how much performance gain one can expect from this tweak. I appreciate your hints, or possibly configs, to resolve this.
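    A hedged checklist-style sketch (the Debian paths and the Drupal memcache module syntax are assumptions to adapt): the socket's directory must exist and be accessible to the web server user, and Drupal's settings.php has to point at the socket instead of 127.0.0.1:11211, otherwise the site errors out exactly as described. As for the gain, dropping TCP for a local-only connection usually shaves a small, measurable amount of latency rather than transforming performance.

```sh
# Hedged sketch; Debian paths are examples. In /etc/memcached.conf:
#   -s /var/run/memcached/memcached.sock
#   -a 0766
#   (comment out -p 11211 and -l 127.0.0.1)
sudo mkdir -p /var/run/memcached
sudo chown memcache:www-data /var/run/memcached
sudo service memcached restart
ls -l /var/run/memcached/memcached.sock        # socket exists, perms look sane?

# Quick test through the socket (netcat-openbsd understands -U):
printf 'stats\nquit\n' | nc -U /var/run/memcached/memcached.sock | head

# Drupal then has to be told about the socket, e.g. for the memcache module
# in settings.php (assumption - adjust to the module actually in use):
#   $conf['memcache_servers'] = array(
#     'unix:///var/run/memcached/memcached.sock' => 'default',
#   );
```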

    Read the article

  • Set a custom favicon locally, that carries across the entire site.

    - by Iszi
    Is there a way to add a custom favicon to an App Tab? In the above thread, @admintech links to a great plugin for changing favicons which covers both the bookmarks folder and the address bar/tab bar icons. However, it still does not quite fully address what I was hoping to accomplish. I'd like to set an App Tab that has a customized icon, that stays the same in that tab no matter what I do there. Since the navigation within an App Tab is very restricted, the chosen favicon should always be relevant to whatever page is loaded in that tab. The Bookmark Favicon Changer has been effective in allowing me to use a custom favicon in the App Tab. But, the favicon only applies to the specific URL that was bookmarked. Any navigation done from that page will return the favicon to blank. Is there another plugin, or perhaps some special tweak to this plugin or the bookmark itself, that will allow me to make the favicon more persistent across the site?

    Read the article

  • PAL–Performance Analysis of Logs

    - by GavinPayneUK
    I was doing some research earlier this week on SQL Server related troubleshooting tools and was surprised I’d forgotten about Microsoft’s PAL tool – Performance Analysis of Logs. PAL is a free PowerShell UI based tool from Microsoft that creates a perfmon template, which can then be used to capture the counters most relevant to the high-level performance review that PAL will then give for specific Microsoft server deployments, SQL Server being one of them.  Everyone knows what perfmon does, probably too...(read more)

    Read the article

  • Where should I go to learn about networking? [closed]

    - by Ollie Saunders
    I wonder if anyone could recommend resource or resources such as a good book that: explains how all the important protocols work and interact. I’m interested in those that are relevant in a typical home network and used over the Internet explains in detail how ADSL Internet connections work to the level of depth necessary so that I’m able to tweak and measure performance settings starts from the beginning but attempts to provide proper understanding rather than idiot-oriented steps to follow Basically, I’m interested in how these technologies work and tend to be implemented in hardware and software rather than “here’s what to do if…” I’m interested in Computer Networking by Andrew S. Tanenbaum and I wonder if anyone else has any experience with that title. It’s expensive but I could probably loan a copy for £3 from the library or so.

    Read the article

  • Ubuntu 12.04 Faster boot, Hibernate & other questions

    - by Samarth Shukla
    I've recently started exploring Ubuntu (my first distro). I did a fresh install of Precise without a swap partition (4 GB RAM). The only issues are slow boot (regardless of the swap) and instability a few days after installation; the runtime performance is immaculate otherwise. Even though it isn't really needed, I still set swappiness = 10. I've tried adding the 'profile' option to the GRUB quiet splash line and already have preload installed, but boot is still pretty slow. I am not too confident about recompiling the kernel yet, but you could advise me on that too. I've also added the following to fstab: #Move /tmp to RAM: tmpfs /tmp tmpfs defaults,noexec,nosuid 0 0 (Also, could you please tell me the exact implication/scope of this tweak on physical RAM and the swap?) But nothing has really changed, so what alternatives are there to make it boot faster? Also, right after the fresh install, even though there was no swap partition, the system still showed a /dev/zram0 of around 2 GB which was never used (probably because of the above fstab edit). Finally, I experimented with hibernate a little, but many claim that it doesn't work on 12.04 (not to mention, I made a swap file of 4 GB for it). What I did was: sudo gedit /var/lib/polkit-1/localauthority/50-local.d/hibernate.pkla Then I added the following lines, saved the file, and closed the text editor: [Re-enable Hibernate] Identity=unix-user:* Action=org.freedesktop.upower.hibernate ResultActive=yes I also edited the upower policy for hibernate: gksudo gedit /usr/share/polkit-1/actions/org.freedesktop.upower.policy and added these lines: < allow_inactive >no< /allow_inactive > < allow_active >yes< /allow_active > But it did not work. So is there an alternative method that can make it work on 12.04?
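    For the hibernate half, a hedged set of checks that sidestep the menu entry and polkit entirely; the paths below are the usual 12.04 ones (the swap file path is an assumption), and resuming from a swap file also needs a resume offset, which is the fiddly part.

```sh
# Hedged sketch: make sure the swap file is usable for hibernation at all.
swapon -s                       # is the 4 GB swap file actually active?
free -m                         # swap should be at least as large as RAM

# The initramfs needs to know where to resume from. For a swap *file* this is
# the UUID of the partition holding the file plus the file's physical offset:
sudo filefrag -v /swapfile | head -n 5          # first physical offset -> resume_offset=
cat /etc/initramfs-tools/conf.d/resume          # should contain RESUME=UUID=<partition uuid>
sudo update-initramfs -u

# Test hibernation directly, bypassing the indicator menu and polkit:
sudo pm-hibernate

# For the slow-boot half, bootchart draws a timeline of the next boots:
sudo apt-get install bootchart                  # logs land in /var/log/bootchart
```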

    Read the article
