Search Results

Search found 15423 results on 617 pages for 'uses clause'.


  • Fujitsu Raku-Raku SmartPhone: Japanese Digital Seniors UX Insight from @debralilley

    - by ultan o'broin
    Super blog post on the super-important subject of digital inclusion by Oracle partner Fujitsu appstech maven, Oracle Applications User Experience FXA-er, and ACE Director Debra Lilley (@debralilley). Debra tells us how Fujitsu is enabling digital inclusion for older mobile users in Japan with its Raku-Raku smartphone: Fujitsu Raku-Raku - My UX Homework (Raku-Raku means easy or comfortable in Japanese). There are UX mobile, social media, and methodology takeaways in Debra's blog for us. Fujitsu Raku-Raku Smartphone Demo. I encourage you to read Debra's blog. In it, she refers to a tailored social media experience for those digital seniors, as they'd be called in Japan (the UK and Ireland use the term silver surfers). You can find that online experience here: Online Community site for Fujitsu Raku-Raku Smartphone Digital Seniors (English translation via Google Translate). It's an important reminder that UX is global, sure, but also that worldwide accessibility and digital inclusion are priorities for UX too. It's vital that we understand such aspects of technology adoption and how the requirements of different categories of technology users can be met. Oracle is committed to providing the best possible user experience for enterprise users of all ages and abilities. That means talking with all sorts of people worldwide and understanding how and why they want to use our technology and what their context of use is. You can read more about Oracle's accessibility program on our corporate website. Proud to say I prompted a few questions in Japan all the way from Ireland. So, UX is not only global, but you can drive UX research globally too without ever leaving home. Brilliant job, Debra. Here's to more such joint research creativity and UX collaborations worldwide between us. Wondering where we might go next? And what a fun way to do things too!

    Read the article

  • ERP/CRM Systems: Desktop-based or Web-based?

    - by Parhs
    Hello guys. I have seen 2-3 ERPs in action and I am wondering which is better: a desktop application, or a web-based one displayed in a browser? My first experience was with a web-based ERP when I was 14 years old. It was terribly slow: for the simplest task you had to do lots of clicks, there was no keyboard support, and pages took ages to load. Last year I worked on migrating an old terminal-based COBOL application to a newer computer. The machine it ran on, which worked until today and still has no problems, was from 1993. The user interface, of course, was text-based, and the speed at which those guys placed orders was amazing: just type the name of the customer, then 5-10 keys to add a product to the order. Compared to this, the ERP's page for placing orders (click "sales orders" in the link) seems terribly slow for adding a product. No keyboard shortcut works to save what you added, and generally I believe you need four times longer to place an order than with the text interface. Having to use both mouse and keyboard for this task is bad and sadistic, so how the heck can these people ever use a system like that? So in the long run a desktop application seems the only way. Of course browsers support shortcuts, but the way to override the defaults that browsers use isn't cross-compatible, and that is a huge problem. Finally, if we are forced to use the cloud in the near future, what about keyboard shortcuts? I feel confused. I have seen converters from desktop applications to browser applications, but they are slow as hell. The question is: what about user friendliness? What kind of application would you use?

    Read the article

  • LAN speeds and firewall/switch connections

    - by microchasm
    I have a small network with about ten users. All workstations flow into a Dell PowerConnect 3424, which then has a single link to a SonicWALL firewall and from there to a cable modem. More important than internet connectivity is speed between machines (specifically a Windows Server box on the LAN which everyone uses simultaneously). I believe the 3424 has gigabit ports, but they look like they're intended for stacking. Is there a way to test speeds on the LAN to see where they currently stand? And is there any low-hanging fruit for increasing them?
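
    For measuring raw throughput between two LAN machines, iperf is a common choice; a minimal sketch, assuming iperf is installed on both ends (the server address below is a placeholder):

        # On the Windows Server box (or any machine acting as the test endpoint):
        iperf -s

        # On a workstation, against the server's LAN address, for a 30-second run:
        iperf -c 192.168.1.10 -t 30

        # Run it between two workstations as well, to see whether the bottleneck is the
        # switch, the server's NIC, or the server itself.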

    Read the article

  • Who spotted the omission?

    - by olaf.heimburger
    In my entry OFM 11g: Install OAM 10.1.4.3 (32-bit) on 64-bit RedHat AS 5 I explained how to install OAM 10.1.4.3 (32-bit) on 64-bit RedHat. This is great and works. If you seriously want to use OAM 10.1.4.3 you should consider OHS 11g 32-bit, but that installation is a bit tricky. Nearly all the tricks needed to get it done are described in the above-mentioned entry. Today I realized that I had missed a small bit needed to complete the installation successfully. The missing part is within the script that creates a vital piece of the OHS 11g package. This script is called genclientsh and resides in $OHS_HOME/bin. It uses gcc to link binaries. By default it works great, but on 64-bit Linux it fails. To get around this, find the variable LD and change its value from gcc to gcc -m32. Done. Caveat: on support.oracle.com you will find a Note that suggests building a small shell script named gcc that includes the -m32 switch. I actually consider this dangerous, because we are human and tend to forget things quickly. Building a globally available script that changes things for a single setup has side effects and will lead to unpredictable results.
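
    A hedged sketch of that edit on the command line (the exact LD= line in genclientsh may differ, so check it with grep first and adjust the sed pattern to match):

        # Confirm what the linker line looks like, back up the script, then switch it to 32-bit mode
        grep -n '^LD=' $OHS_HOME/bin/genclientsh
        cp $OHS_HOME/bin/genclientsh $OHS_HOME/bin/genclientsh.orig
        sed -i 's|^LD=gcc|LD="gcc -m32"|' $OHS_HOME/bin/genclientsh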

    Read the article

  • A different interface for SQL Server Reporting Services?

    - by AngryHacker
    I have a SQL Server 2005 Reporting Services implementation. It seems that the only way to actually access the reports is for users to use Internet Explorer. The web page uses an ActiveX control to do its printing (and probably other functions as well). Does SSRS have a different way to access its functionality via the web browser, maybe Java- or HTML-based? If so, how do I actually turn it on? The reason I am asking is that security is being tightened and ActiveX controls will be banished, so users won't be able to print.

    Read the article

  • What could be causing frequent display freezes?

    - by austen
    I just installed Ubuntu 14.04 two days ago (coming from Windows 8), and in the two days I've been using it my display has frozen four or five times. The mouse won't move but the keyboard does respond, so I can use Ctrl+Alt+Backspace to recover. It seems like it might just be the display freezing, because one of the times I was watching a YouTube video and the audio kept playing. I have an Nvidia graphics card with the most recent Nvidia drivers for it enabled. I see that a lot of questions about Ubuntu freezing get marked as duplicates and pretty much always linked back to a thread about what to do when it freezes. Clearly I've got that bit figured out already, and I did read that thread for further advice. What I'm looking for, though, is how to fix this permanently. Output from lspci -nnk | grep -iA2 VGA: 00:02.0 VGA compatible controller [0300]: Intel Corporation 3rd Gen Core processor Graphics Controller [8086:0166] (rev 09) Subsystem: Lenovo Device [17aa:2200] Kernel driver in use: i915 Update: JohnnyEnglish pointed out that Ubuntu is using the integrated graphics, not my Nvidia card. It turns out my laptop uses Nvidia Optimus and I cannot enable only the discrete card through the BIOS. I found out about Nvidia Prime and got it set up using this article. The settings panel that lets you select the graphics says 'performance mode' is enabled, but when I check which graphics controller is active from the terminal, it still says it's using the integrated graphics. I'm not sure whether this could be causing the freezes, but I guess it's a starting point. Any ideas on how to resolve this?
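
    For reference, a hedged sketch of the checks usually worth running with Nvidia Prime on 14.04 (command names assume Ubuntu's nvidia-prime package is installed):

        # Ask nvidia-prime which GPU it thinks is selected
        prime-select query

        # Explicitly select the discrete NVIDIA GPU, then log out and back in (or reboot)
        sudo prime-select nvidia

        # Afterwards, confirm which kernel driver is actually bound to the VGA controller
        lspci -nnk | grep -iA2 VGA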

    Read the article

  • How do I compare binary files in Linux?

    - by frustratedCmpNoLongerUser
    I need to compare two binary files and get output in the form <fileoffset-hex> <file1-byte-hex> <file2-byte-hex> for every differing byte. So if file1.bin is 00 90 00 11 and file2.bin is 00 91 00 10, I want to get something like: 00000001 90 91 00000003 11 10 What is the easiest way to accomplish this? A standard tool? Some third-party tool? (Note: cmp -l should be killed with fire; it uses decimal for offsets and octal for bytes.)
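
    One way to get roughly that format is to post-process cmp -l with gawk, converting its decimal offsets and octal byte values; a sketch (offsets printed zero-based, gawk's strtonum assumed available):

        cmp -l file1.bin file2.bin | \
            gawk '{printf "%08X %02X %02X\n", $1 - 1, strtonum(0 $2), strtonum(0 $3)}'

        # Alternatively, diff two hex dumps side by side:
        diff <(xxd file1.bin) <(xxd file2.bin)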

    Read the article

  • Installing both lxml 3.1.2 and lxml2 on Ubuntu 12.04

    - by wgw
    I asked this on SO: http://stackoverflow.com/questions/19852911/lxml-3-1-2-and-lxml2-both-on-ubuntu/19856674#19856674 But it is perhaps more appropriate for AskUbuntu. So here it is again, reformulated. On the lxml site they suggest that it is possible to have both lxml2 and the newest version of lxml on ubuntu: Using lxml with python-libxml2 If you want to use lxml together with the official libxml2 Python bindings (maybe because one of your dependencies uses it), you must build lxml statically. Otherwise, the two packages will interfere in places where the libxml2 library requires global configuration, which can have any kind of effect from disappearing functionality to crashes in either of the two. To get a static build, either pass the --static-deps option to the setup.py script, or run pip with the STATIC_DEPS or STATICBUILD environment variable set to true, i.e. STATIC_DEPS=true pip install lxml The STATICBUILD environment variable is handled equivalently to the STATIC_DEPS variable, but is used by some other extension packages, too. I am generally confused about how pip packages and ubuntu packages get along, so I hesitate to run STATIC_DEPS=true pip install lxml. Will it damage/confuse my installed lxml2 package? The suggestion on SO was to install the new lxml in a virtualenv. That looks like the best way to go, but the lxml site is suggesting that a dual installation will work also. In general: what happens if I use pip (to get a newer install) for a package that is already installed by apt-get?
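
    A sketch of the virtualenv route suggested on SO, which leaves the apt-installed python-libxml2 and lxml packages untouched while pip builds the newer lxml statically inside the environment (paths and the version pin are illustrative):

        virtualenv ~/venvs/lxml-test
        source ~/venvs/lxml-test/bin/activate
        STATIC_DEPS=true pip install lxml==3.1.2

        # Confirm which lxml/libxml2 the environment is actually using
        python -c "import lxml.etree as e; print(e.LXML_VERSION, e.LIBXML_VERSION)"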

    Read the article

  • PHP ORM style of querying

    - by Petah
    Ok so I have made an ORM library for PHP. It uses syntax like so (assume that $business_locations is an array): Business::type(Business::TYPE_AUTOMOTIVE)-> size(Business::SIZE_SMALL)-> left_join(BusinessOwner::table(), BusinessOwner::business_id(), SQL::OP_EQUALS, Business::id())-> left_join(Owner::table(), SQL::OP_EQUALS, Owner::id(), BusinessOwner::owner_id())-> where(Business::location_id(), SQL::in($business_locations))-> group_by(Business::id())-> select(SQL::count(BusinessOwner::id())); Which can also be represented as: $query = new Business(); $query->set_type(Business::TYPE_AUTOMOTIVE); $query->set_size(Business::SIZE_SMALL); $query->left_join(BusinessOwner::table(), BusinessOwner::business_id(), SQL::OP_EQUALS, $query->id()); $query->left_join(Owner::table(), SQL::OP_EQUALS, Owner::id(), BusinessOwner::owner_id()); $query->where(Business::location_id(), SQL::in($business_locations)); $query->group_by(Business::id()); $query->select(SQL::count(BusinessOwner::id())); This would produce a query like: SELECT COUNT(`business_owners`.`id`) FROM `businesses` LEFT JOIN `business_owners` ON `business_owners`.`business_id` = `businesses`.`id` LEFT JOIN `owners` ON `owners`.`id` = `business_owners`.`owner_id` WHERE `businesses`.`type` = 'automotive' AND `businesses`.`size` = 'small' AND `businesses`.`location_id` IN ( 1, 2, 3, 4 ) GROUP BY `businesses`.`id` Please keep in mind that the syntax might not be perfectly correct (I only wrote this off the top of my head). Anyway, what do you think of this style of querying? Is the first method or the second better/clearer/cleaner/etc.? What would you do to improve it?

    Read the article

  • Shell script with ImageMagick: hangs forever?

    - by AP257
    I've generated a shell script that uses ImageMagick to convert and crop around 18000 images. Here's a sample entry (so there are 18000 of these): if [ ! -f ./cropped/16333-1.png ] then convert -crop 724x118+876+1989 ./lin/34.png ./cropped/16333-1.png echo cropping 16333-1 fi if [ ! -f ./cropped/16333-1_thumb.png ] then convert -define jpeg:size=400x100 ./cropped/16333-1.png -thumbnail '400x100>' -background transparent -gravity center -extent 400x100 ./cropped/16333-1_thumb.png echo thumbing 16333-1 fi The script only runs for about 2000 images before hanging forever. Am I missing something, or leaking memory somewhere? Thanks for your help!
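
    One way to narrow down a hang like this is to wrap each convert call so a single stuck conversion can't stall the whole run; a hedged sketch using coreutils timeout (the 60-second ceiling and the log file name are arbitrary choices):

        if [ ! -f ./cropped/16333-1.png ]
        then
            timeout 60 convert -crop 724x118+876+1989 ./lin/34.png ./cropped/16333-1.png \
                || echo "convert failed or timed out on 16333-1" >> crop-errors.log
            echo cropping 16333-1
        fi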

    Read the article

  • Robocopy hiding folders on backup drives

    - by Neil Barnwell
    I have a backup batch file that uses Robocopy to back up my files: robocopy "C:\" "G:\Default\RoboCopyBackup\C" /XF Pagefile.sys /XD "System Volume Information" "Recycler" "Temporary Internet Files" "Installer Cache" "Temp" /E /R:1 /W:0 /TEE /XJ This should create a folder structure on the external backup drive like so: G:\Default\RoboCopyBackup\C\... However, G: appears totally empty. What is weird is that the folders and files are there! If I type the above path into the address bar, I see all the files and folders. Can anyone help me work out why? I think it might be some NTFS-based ownership/permissions thing, but I'm not sure.
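
    For what it's worth, a common cause of this symptom is that robocopy copies the Hidden and System attributes of the source root onto the destination folder; a hedged sketch of the check and fix (path taken from the command above):

        rem Clear the Hidden and System attributes on the backup destination
        attrib -s -h "G:\Default\RoboCopyBackup\C"

        rem To stop the attributes being copied in future, copy only data and timestamps
        rem by adding /COPY:DT (and /DCOPY:T for directories) to the robocopy command line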

    Read the article

  • Does scheduled recycling of app pools in IIS7 help the server conserve memory?

    - by user29266
    Hello, I have a VPS (IIS7 on Windows 2008). It's got 40 websites and a SQL Server 2008 instance powering them, with only 2 GB of RAM. None of the sites are mission critical; they are all just demos. I often have RAM issues on the server because each site does caching and generally uses a lot of memory. Would it make sense to set the application pools to recycle every 3 hours? I'm sure this would free up any memory leaks or processes left "hanging". Are there any other tips on this? Thank you very much! Aron
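
    If scheduled recycling does turn out to help, a hedged sketch of configuring it from the command line with appcmd (pool name, time and memory limit are placeholders; verify the property names against the IIS documentation):

        rem Recycle the pool every day at 03:00
        %windir%\system32\inetsrv\appcmd.exe set apppool "DemoSitePool" /+recycling.periodicRestart.schedule.[value='03:00:00']

        rem Alternatively, recycle when the pool's private memory exceeds roughly 512 MB (value in KB)
        %windir%\system32\inetsrv\appcmd.exe set apppool "DemoSitePool" /recycling.periodicRestart.privateMemory:524288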

    Read the article

  • mod_access for lighttpd causes a 403 error for all POST requests

    - by Sam
    I have found on my Debian server that running the lighttpd module mod_access causes the server to respond with a 403 to all POST requests. It's very odd, as I have two servers: one runs as I'd expect and the other keeps returning these 403s. They are running identical configs for lighttpd and PHP. My lighttpd.conf is: https://gist.github.com/4269500 There is also one other custom conf: https://gist.github.com/4269508 I've opened up the servers for requests until I get this fixed; the server that works is http://mercury.isitup.org/ and the one that fails is http://venus.isitup.org/. After working out that disabling mod_access resolves the problem, I grepped all my lighttpd configs for uses of it (docs). Disabling each line I found didn't help, leading me to think this is perhaps some default behaviour (or a bug?)... Has anyone come across this before, or know what configuration value I've got wrong? Versions Debian: Debian GNU/Linux 6.0.6 (squeeze) Lighttpd: lighttpd/1.4.28 (ssl) PHP: PHP 5.3.19-1~dotdeb.0 with Suhosin-Patch (cli)
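
    When narrowing down which module or conditional is rejecting the request, lighttpd's debug options can help; a small sketch to add temporarily to lighttpd.conf on the failing server, then compare the error log output between the two machines:

        debug.log-request-handling   = "enable"
        debug.log-condition-handling = "enable"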

    Read the article

  • Segmentation fault while switching QCompleter for QLineEdit [on hold]

    - by san
    I have a QLineEdit that uses autocompletion one which on focusIn event in which it shows paths from XML List(here I have used hardcoded list) but if user doesn't find the path from that list popped by QCompleter than I want user to be able to browse to path typing '/' in QLineEdit , I am not able to select the paths say /Users etc and on trying to type Segmentation fault occurs. from PyQt4.Qt import Qt, QObject,QLineEdit from PyQt4.QtCore import pyqtSlot,SIGNAL,SLOT from PyQt4 import QtGui, QtCore import sys class DirLineEdit(QLineEdit, QtCore.QObject): """docstring for DirLineEdit""" def __init__(self): super(DirLineEdit, self).__init__() self.defaultList = ['~/Development/python/searchMethod', '~/Development/Nuke_python', '~/Development/python/openexr', '~/Development/python/cpp2python'] self.textChanged.connect(self.__dirCompleter) def focusInEvent(self, event): if len(self.text()) == 0: self._pathsList() QtGui.QLineEdit.focusInEvent(self, event) self.completer().complete() def __dirCompleter(self): if len(self.text()) == 0: model = MyListModel(self.defaultList, self) completer = QtGui.QCompleter(model, self) completer.setModel(model) else: dirModel = QtGui.QFileSystemModel() dirModel.setRootPath(QtCore.QDir.currentPath()) dirModel.setFilter(QtCore.QDir.AllDirs | QtCore.QDir.NoDotAndDotDot | QtCore.QDir.Files) dirModel.setNameFilterDisables(0) completer = QtGui.QCompleter(dirModel, self) completer.setCaseSensitivity(QtCore.Qt.CaseInsensitive) completer.setModel(dirModel) self.setCompleter(completer) def _pathsList(self): completerList = QtCore.QStringList() for i in self.defaultList: completerList.append(QtCore.QString(i)) lineEditCompleter = QtGui.QCompleter(completerList) lineEditCompleter.setCompletionMode(QtGui.QCompleter.UnfilteredPopupCompletion) self.setCompleter(lineEditCompleter) class MyListModel(QtCore.QAbstractListModel): def __init__(self, datain, parent=None, *args): """ datain: a list where each item is a row """ QtCore.QAbstractTableModel.__init__(self, parent, *args) self.listdata = datain def rowCount(self, parent=QtCore.QModelIndex()): return len(self.listdata) def data(self, index, role): if index.isValid() and role == QtCore.Qt.DisplayRole: return QtCore.QVariant(self.listdata[index.row()]) else: return QtCore.QVariant() app = QtGui.QApplication(sys.argv) smObj = DirLineEdit() smObj.show() app.exec_() Please help fix this or suggest better way of implementation?

    Read the article

  • Internet keeps getting Disconnected & Reconnected

    - by Paul
    I have an Internet connection through a phone company which uses a DSL modem to connect. For the past few weeks my connection keeps getting disconnected for a few seconds and then reconnects automatically. I asked the service provider; they checked and even replaced some wires, but it didn't help. The funny thing is that sometimes there seems to be a rhythm where it disconnects for an average of 13 seconds or so, plus or minus a few seconds, for hours at a time. I don't know how to attach an image to this post showing the pattern of connects and disconnects. I have taken a screenshot using Snagit that shows the timing pattern; if someone can explain how, I can attach it here. Thanks

    Read the article

  • What the Hekaton?

    - by Tony Davis
    Hekaton, the power behind SQL Server 2014’s In-Memory OLTP technology, is intended to make data operations run orders of magnitude faster on SQL Server. This works its magic partly by serving database workloads entirely from main memory, using memory-optimized table structures. It replaces the relational engine’s standard locking model with an optimistic concurrency model based on time-stamped row versions. Deeper down, the Hekaton engine uses new, ‘latch-free’ data structures. So far, so good, but performance improvements on this scale require a compromise, and the compromise is that these aren’t tables as we understand them. For the database developer, these differences are painful because they involve sacrificing some very important bits of the relational model. Most importantly, Hekaton tables don’t currently support FOREIGN KEY constraints or CHECK constraints, and you can’t put the checks in triggers because there aren’t any DML triggers either. Constraints allow a relational designer to enforce relational integrity and data integrity. Without them, of course, ‘bad data’ can get into our Hekaton tables. There is no easy way of preventing it. For several classes of database and data, this is a show-stopper. One may regard all these restrictions regretfully, seeing limited opportunity to try out Hekaton with current databases, but perhaps there is also a sudden glow of recognition. Isn’t this how we all originally imagined table variables were going to be, back in SQL 2005? And they have much the same restrictions. Maybe, instead of pretending that a currently-designed database can be ‘Hekatonized’ with a few mouse clicks, we should redesign databases for SQL 2014 to replace table variables with Hekaton tables, exploiting this technology for fast intermediate processing, and for the most part forget, for now, the idea of trying to convert our base relational tables into Hekaton tables. Few database developers would be averse to having their working tables running an order of magnitude faster, as long as it didn’t compromise the integrity of the data in the base tables.
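
    To make the 'Hekatonized table variable' idea concrete, here is a hypothetical sketch of a memory-optimized scratch table for intermediate processing (SQL Server 2014 syntax; the table and column names are invented, and it assumes the database already has a MEMORY_OPTIMIZED_DATA filegroup):

        -- SCHEMA_ONLY durability: rows live purely in memory and vanish on restart,
        -- which suits the table-variable-style intermediate workload described above.
        CREATE TABLE dbo.OrderStaging
        (
            OrderId INT NOT NULL
                PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
            Payload NVARCHAR(4000) NULL
        )
        WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);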

    Read the article

  • Owner of uploads directory is `www-data` but this prevents FTP access via PHP scripts

    - by letseatfood
    To allow Apache write access to my site's uploads folder, I needed to run chown www-data:www-data /var/www/mysite/uploads. This lets me delete files from the folder via unlink() in a PHP script. Unfortunately, it prevents another PHP script, which uses FTP functions, from working. I think this is because the FTP user is mike, and now that the uploads directory is owned by www-data, mike cannot access it. I added mike to the www-data group, but this does not fix the issue. Can somebody advise me on how to allow the PHP FTP functions to work in addition to file deletion using PHP's unlink()?
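
    A hedged sketch of one way to let both the Apache user and mike write to the directory via shared group permissions (mike's new group membership only takes effect after he starts a fresh login/FTP session):

        sudo chown -R www-data:www-data /var/www/mysite/uploads
        sudo chmod -R g+w /var/www/mysite/uploads     # group members can write too
        sudo usermod -a -G www-data mike              # add mike to the www-data group
        sudo chmod g+s /var/www/mysite/uploads        # new files inherit the www-data group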

    Read the article

  • Need help configuring my Tomcat server

    - by gablin
    I just reinstalled my entire server, and now I can't seem to get my JSP-based website to work on Tomcat anymore. I use the same server.xml file, which worked perfectly before the reinstallation, but no longer. Here's the content of the server.xml file which worked before: <!--APR library loader. Documentation at /docs/apr.html --> <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" /> <!--Initialize Jasper prior to webapps are loaded. Documentation at /docs/jasper-howto.html --> <Listener className="org.apache.catalina.core.JasperListener" /> <!-- JMX Support for the Tomcat server. Documentation at /docs/non-existent.html --> <Listener className="org.apache.catalina.mbeans.ServerLifecycleListener" /> <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" /> <!-- Global JNDI resources Documentation at /docs/jndi-resources-howto.html --> <GlobalNamingResources> <!-- Editable user database that can also be used by UserDatabaseRealm to authenticate users --> <Resource name="UserDatabase" auth="Container" type="org.apache.catalina.UserDatabase" description="User database that can be updated and saved" factory="org.apache.catalina.users.MemoryUserDatabaseFactory" pathname="conf/tomcat-users.xml" /> </GlobalNamingResources> <!-- A "Service" is a collection of one or more "Connectors" that share a single "Container" Note: A "Service" is not itself a "Container", so you may not define subcomponents such as "Valves" at this level. Documentation at /docs/config/service.html --> <Service name="Catalina"> <!--The connectors can use a shared executor, you can define one or more named thread pools--> <!-- <Executor name="tomcatThreadPool" namePrefix="catalina-exec-" maxThreads="150" minSpareThreads="4"/> --> <!-- A "Connector" represents an endpoint by which requests are received and responses are returned. Documentation at : Java HTTP Connector: /docs/config/http.html (blocking & non-blocking) Java AJP Connector: /docs/config/ajp.html APR (HTTP/AJP) Connector: /docs/apr.html Define a non-SSL HTTP/1.1 Connector on port 8080 --> <Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" /> <!-- A "Connector" using the shared thread pool--> <!-- <Connector executor="tomcatThreadPool" port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" /> --> <!-- Define a SSL HTTP/1.1 Connector on port 8443 This connector uses the JSSE configuration, when using APR, the connector should be using the OpenSSL style configuration described in the APR documentation --> <!-- <Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true" maxThreads="150" scheme="https" secure="true" clientAuth="false" sslProtocol="TLS" /> --> <!-- Define an AJP 1.3 Connector on port 8009 --> <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" /> <!-- An Engine represents the entry point (within Catalina) that processes every request. The Engine implementation for Tomcat stand alone analyzes the HTTP headers included with the request, and passes them on to the appropriate Host (virtual host). 
Documentation at /docs/config/engine.html --> <!-- You should set jvmRoute to support load-balancing via AJP ie : <Engine name="Standalone" defaultHost="localhost" jvmRoute="jvm1"> --> <Engine name="Catalina" defaultHost="localhost"> <!--For clustering, please take a look at documentation at: /docs/cluster-howto.html (simple how to) /docs/config/cluster.html (reference documentation) --> <!-- <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/> --> <!-- The request dumper valve dumps useful debugging information about the request and response data received and sent by Tomcat. Documentation at: /docs/config/valve.html --> <!-- <Valve className="org.apache.catalina.valves.RequestDumperValve"/> --> <!-- This Realm uses the UserDatabase configured in the global JNDI resources under the key "UserDatabase". Any edits that are performed against this UserDatabase are immediately available for use by the Realm. --> <Realm className="org.apache.catalina.realm.UserDatabaseRealm" resourceName="UserDatabase"/> <!-- Define the default virtual host Note: XML Schema validation will not work with Xerces 2.2. --> <!-- <Host name="localhost" appBase="webapps" unpackWARs="true" autoDeploy="true" xmlValidation="false" xmlNamespaceAware="false"> --> <!-- SingleSignOn valve, share authentication between web applications Documentation at: /docs/config/valve.html --> <!-- <Valve className="org.apache.catalina.authenticator.SingleSignOn" /> --> <!-- Access log processes all example. Documentation at: /docs/config/valve.html --> <!-- <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs" prefix="localhost_access_log." suffix=".txt" pattern="common" resolveHosts="false"/> --> <!-- </Host> --> <Host name="www.rebootradio.nu"> <Alias>rebootradio.nu</Alias> <Context path="" docBase="D:/services/http/rebootradio.nu" debug="1" reloadable="true"/> </Host> </Engine> </Service> </Server> The JSP site doesn't use any WAR files or anything like that; there's just a default.jsp in the specified folder D:/services/http/rebootradio.nu which loads the site. As I said, this configuration worked before, but now with the latest verion of XAMPP and Tomcat it doesn't work anymore. All I get is a 404 message saying The requested resource () is not available.

    Read the article

  • Sendmail encrypted

    - by user1948828
    I manage a website running on Apache with public and private areas. When people apply for an account to access the protected portions of the site, they do a TLS/SSL-protected POST containing their information, which is saved to a (hopefully) nonpublic directory on the server. Then I have a Python script which takes URL-encoded POSTs with this user information, sends back a plaintext confirmation to the applicant, encrypts their information with a freeware Java command-line utility (specifically this one: http://spi.dod.mil/ewizard.htm), base64-encodes it, puts it in a file as a MIME attachment and uses sendmail to forward it to my (and several coworkers' scattered around the country) email account(s) on an Exchange server with Outlook clients. This has worked well for years, but it is awkward because it involves manually decrypting the information on a Windows box once it is received, using the above-mentioned encryption utility. This significantly limits how many applications can be processed. I would like to encrypt the information in a format that Outlook/Exchange can inherently understand and display, so that these emails can be viewed simply by clicking on them. I do have company-provided PKI public certs for all the people I need to send to, and am able to send/receive encrypted emails in Outlook manually, but I would like to know how I can send to Outlook from Apache/Linux/Python on the command line using the same PKI certs. I don't need to receive them, just send. Is there a utility that can do this? I had thought PGP might work, but I haven't been able to figure it out.
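
    Since Outlook understands S/MIME natively, one hedged sketch of doing this with openssl smime and a recipient's public certificate (the addresses, file names and certificate path are placeholders):

        # Encrypt the applicant data to a coworker's public cert and hand the result to sendmail;
        # Outlook should display it like any other S/MIME encrypted message.
        openssl smime -encrypt -aes256 \
            -in applicant-info.txt \
            -from webform@example.org -to reviewer@example.org \
            -subject "New account application" \
            reviewer-cert.pem | sendmail -t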

    Read the article

  • Apache mod_proxy parameters

    - by mike
    Hi! I have a machine running Apache with mod_proxy that I'm using to proxy a local Tomcat server running on another port. The problem is that Tomcat does not support wildcard subdomains (the whole reason for using Apache/mod_proxy), and our app uses the subdomain to figure out which account the data should come from. With that said, is there a way to pass the subdomain as a URL parameter via mod_proxy? For example, I have this: ProxyPass / http://example.com:8080/ in a virtual host block, and I can access the site from any subdomain. Would it be possible to do something like: ProxyPass / http://example.com:8080/?subdomain=the_sub_domain_requested Thanks for any and all help... Mike
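
    ProxyPass itself can't vary the target by Host header, but mod_rewrite's proxy flag can; a hypothetical sketch for the virtual host block (the domain, backend port and parameter name are assumptions, and it requires mod_rewrite plus mod_proxy_http to be enabled):

        RewriteEngine On
        # Capture the subdomain from the Host header...
        RewriteCond %{HTTP_HOST} ^([^.]+)\.example\.com$ [NC]
        # ...and proxy the request to Tomcat with it appended as a query parameter
        RewriteRule ^/?(.*)$ http://example.com:8080/$1?subdomain=%1 [P,QSA]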

    Read the article

  • Avoiding double NAT with PPPoA connection

    - by user498429
    I've got an ASUS RT-N56U wending its way to me and have been thinking about how to set this up on my home network. I currently have a Netgear DG634g V5 and was hoping to use this device as a modem only, with everything else being done by the router. Problem is, my ISP uses PPPoA and the ASUS seems to support only PPPoE. I'm aware that a double NAT configuration should be avoided and I've seen some instructions here: http://www.tomshardware.co.uk/forum/33700-17-ultimate-modem-router-setup-thread Specifically, I was going to follow the guidance in the section entitled "Chaining Two Networks Together In a Cascading Fashion (Modem handles PPPoA)". That seems like it could work. However, is this a double NAT configuration, or even a good way to do it? Would UPnP still work? The other option, I understand, is to buy the DrayTek Vigor 120, but I'd ideally like to avoid the cost of that if it's not necessary.

    Read the article

  • Keyloggers and Virtualization

    - by paranoid
    Whilst pondering security, and setting up different VMs for certain online activities deemed more risky or requiring extra security (banking, visiting untrusted websites, etc.), I came to think about how such a setup (different VMs for different uses) would defend me against a keylogger. So, two questions then: 1: If a keylogger has been installed inside a VM, can it capture data outside its own VM? 2: The opposite: does a keylogger in a host capture keystrokes typed within a VM residing on that host? My bet would be no and yes respectively, but I really have no idea. Does anyone else know?

    Read the article

  • Improved Customer Experience, but at what Cost?

    - by Tony Berk
    We can all probably agree that improving your customers' experience is a good thing. But a key question many people are asking is: will it help your organization and, in particular, what are the financial benefits? That's a good question, especially when companies ARE experiencing phenomenal return on investment (ROI). Of course, there are many factors that impact ROI or other measures of success, but we'd like to share some success stories as examples of customer experience in action and delivering positive results. If you would like to learn more about the economics of customer experience, see Brian Curran's presentation at the Oracle Customer Experience Summit last month. In this series of blog posts, we'll share actual customer stories. Today's example is Dell, which uses Oracle Real-Time Decisions (RTD) and Siebel CRM as part of their customer experience portfolio to better understand their customers' needs and wants and provide consistent interactions. Regular readers of this blog are probably familiar with Siebel, but RTD may be new to many of you. RTD is a complete decision management solution that delivers real-time decisions and recommendations and automatically renders decisions within a business process to create tailored messaging for every customer interaction. What does that mean? In the video below, Dell describes how customer experience is important not just for one interaction channel, but across all "vehicles." RTD is helping Dell understand customer behavior and communicate with the customer in a more relevant manner, across all communication or interaction channels, including sales and service call centers, email marketing and online. Dell continues to expand use of RTD because the benefits are showing up in sales, service and marketing results, including a 19% increase in close rates, faster issue resolution and a 40% improvement in revenue per click in email marketing. Click here to learn more about Oracle Customer Experience and stay tuned for more customer spotlights.

    Read the article

  • iOS and Server: OAuth strategy

    - by drekka
    I'm trying to work out how to handle authentication when I have iOS clients accessing a Node.js server and want to use services such as Google, Facebook etc. to provide basic authentication for my application. My current idea of a typical flow is this: The user taps a Facebook/Google button, which triggers the OAuth(2) dialogs and authenticates the user on the device. At this point the device has the user's access token. This token is saved so that the next time the user opens the app it can be retrieved. The access token is transmitted to my Node.js server, which stores it and tags it as unverified. The server verifies the token by making a call to Facebook/Google for the user's email address. If this works, the token is flagged as verified and the server knows it has a verified user. If Facebook/Google fail to authenticate the token, the server tells the iOS client to re-authenticate and present a new token. The iOS client can now access API calls on my Node.js server, passing the token each time. As long as the token matches the stored and verified token, the server accepts the call. Obviously the tokens have time limits. I suspect it's possible, but highly unlikely, that someone could sniff an access token and attempt to use it within its lifespan, but other than that I'm hoping this is a reasonably secure method for verifying users on iOS clients without having to roll my own security. Any opinions and advice welcome.
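
    For the server-side verification step, the provider calls would look roughly like the following hedged sketch; treat the endpoint URLs and parameters as assumptions to check against each provider's current documentation:

        # Facebook: a valid user access token returns the user's profile (email requires the email permission)
        curl "https://graph.facebook.com/me?fields=email&access_token=USER_ACCESS_TOKEN"

        # Google: tokeninfo reports the audience, scopes and expiry for an access token
        curl "https://www.googleapis.com/oauth2/v1/tokeninfo?access_token=USER_ACCESS_TOKEN"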

    Read the article

  • customized xkb layouts not working (in KDE?) after upgrade from Ubuntu 9.10 to 10.04

    - by Alan
    I customised my keyboard layout in 9.10 by editing the appropriate /usr/share/X11/xkb/symbols/ file. After upgrading to 10.04 I noticed it had overwritten all my modifications, so I recovered the layout and overwrote the symbol file's base entry. Sadly, KDE (and, presumably, the entire OS) seems to ignore the files altogether. The help files don't mention anything about modifying layouts anyway (and the layout switcher seems to use setxkbmap, which uses the above path according to its man page), so I'm at a bit of a loss. Do I need to compile this into some other format somehow, or how else do I get it to work?
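
    One hedged thing to check: X keeps compiled keymaps in a cache, and on some Ubuntu releases an edited symbols file is ignored until that cache is cleared; a sketch (paths assume a stock install, and the layout name is a placeholder):

        sudo rm /var/lib/xkb/*.xkm          # clear the compiled-keymap cache
        setxkbmap -layout us                # reload your layout so it gets recompiled
        xkbcomp $DISPLAY /tmp/current.xkb   # dump the active keymap to confirm the edits were picked up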

    Read the article
