Search Results

Search found 12900 results on 516 pages for 'rules engine'.

Page 182/516

  • Why won't Remmina connect to Windows 7 Remote Desktop?

    - by rfc1484
    I'm using Ubuntu and I'm trying to connect to another machine on a different network using remote desktop. On the Windows 7 machine I did the following to enable Remote Desktop: I went to Computer - Properties - Remote settings and selected the option "Allow connections from computers running any version of Remote Desktop"; then I opened "Windows Firewall with Advanced Security" and, under inbound rules, enabled the Remote Desktop rules (public and domain). I have also installed Remmina on the Ubuntu machine and configured it as follows: I selected the RDP protocol, entered the Windows machine's public IP in the server field, and typed my login credentials (the same as my Windows admin account) in the username/password fields. But when I try to connect I get this error message: "Unable to connect to RDP server 89.130.251.160". If I ping my Windows 7 machine, I get a correct response. Any suggestions?
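    A quick sanity check worth adding here (my suggestion, not from the original question; it assumes the netcat package is installed on the Ubuntu box): a successful ping only shows ICMP gets through, while RDP needs TCP port 3389 reachable - and forwarded by the router when you target a public IP - so testing that port directly narrows the problem down:

        # Test whether TCP 3389 (RDP) is reachable; ping only proves ICMP works
        nc -zv 89.130.251.160 3389

    If the port shows as filtered or the connection times out, the problem is port forwarding or the firewall rather than Remmina itself.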

    Read the article

  • Strategy to use two different measurement systems in software

    - by Dennis
    I have an application that needs to accept and output values in both US customary units and the metric system. Right now the conversion, input, and output are a mess: you can only enter values in the US system, you can choose the output to be US or metric, and the code that does the conversions is everywhere. So I want to organize this and put together some simple rules. Here is what I came up with:

    Rules
    - The user can enter values in either US or metric, and the user interface will take care of marking this properly.
    - All units will be stored internally as US, since the majority of the system already has most of the data stored that way and depends on it. It shouldn't matter, I suppose, as long as you don't mix units.
    - All output will be in US or metric, depending on the user's selection/choice/preference.

    In theory this sounds great and seems like a solution. However, one little problem I came across is this: there is some data stored in code or in the database that already comes back as a string like 4 x 13/16" screws, meaning "four 13/16-inch screws". I need the 13/16" to be in either US or metric. Where exactly do I put the conversion code for this value? The above already mixes presentation and data, but the data for the field I need to populate is that whole string. I can certainly split it up into the number 4, the 13/16", the " x " and the " screws", but the question remains: where do I put the conversion code?

    Different locations for conversion routines
    1) Right now the string is built in a class. I can put conversion code right into that class, and it may be a good solution. Except then, to be consistent, I would be putting conversion procedures everywhere in the code at the data source, or right after reading from the database. The problem is that my code would then have to deal with two systems throughout the codebase.
    2) According to the rules, my idea was to put it in the view script, i.e. the last chance to modify the value before it is shown to the user. That may be the right thing to do, but it strikes me it may not always be the best solution. (First, it complicates the view script a tad; second, I need to do more work on the data side to split things up or do extra parsing, as in my case above.)
    3) Another solution is to do this somewhere in the data-prep step before the view, i.e. somewhere in the middle, after the data source but before the view. This strikes me as messy, and that could be the reason my codebase is in such a mess right now.

    It seems that there is no best solution. What do I do?
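    A common way to keep rule 2 enforceable is to wrap measurements in a small value type that always stores the canonical (US) value and converts only at the formatting boundary. A minimal C# sketch of that idea - purely illustrative; the type and member names are assumptions, not from the question:

        public enum UnitSystem { US, Metric }

        // Stores the canonical value in inches (the "internal units are US" rule)
        // and converts only when formatting for output.
        public readonly struct Length
        {
            private readonly double _inches;
            private Length(double inches) => _inches = inches;

            public static Length FromInches(double inches) => new Length(inches);
            public static Length FromMillimeters(double mm) => new Length(mm / 25.4);

            public double Inches => _inches;
            public double Millimeters => _inches * 25.4;

            // Formatting is the only place that knows about the user's preference.
            public string ToDisplayString(UnitSystem system) =>
                system == UnitSystem.US
                    ? $"{_inches:0.###}\""
                    : $"{Millimeters:0.#} mm";
        }

        // Usage: the "4 x 13/16\" screws" string is rebuilt at the view boundary.
        // var thread = Length.FromInches(13.0 / 16.0);
        // var label = $"4 x {thread.ToDisplayString(UnitSystem.Metric)} screws"; // "4 x 20.6 mm screws"

    With something like this, the data source keeps producing canonical values and the "where does the conversion go" question has a single answer: in the formatting call at the view boundary.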

    Read the article

  • NDepend v4 has just been released!

    - by Vincent Maverick Durano
    A few months ago I blogged about the release of NDepend v3's Continuous Integration and Reporting capabilities here. Recently, the NDepend team released v4, which comes with code rules based on C# LINQ queries (CQLinq); this makes code rules much more powerful and flexible. There are a couple of new rules available, like:
    http://www.ndepend.com/DefaultRules/webframe?Q_UI_layer_shouldn't_use_directly_DB_types.html
    http://www.ndepend.com/DefaultRules/webframe?Q_Types_with_disposable_instance_fields_must_be_disposable.html
    http://www.ndepend.com/DefaultRules/webframe?Q_Avoid_the_Singleton_pattern.html
    http://www.ndepend.com/DefaultRules/webframe?Q_Avoid_making_complex_methods_even_more_complex_(Source_CC).html
    v4 also provides NDepend.API and a dozen open-source tools developed with NDepend.API (the Power Tools): http://www.ndepend.com/API/webframe.html
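    For readers who haven't seen CQLinq, the general shape of a rule is an ordinary LINQ query over the code model, prefixed with a match condition. A rough sketch from memory (the exact query and property names here are assumptions, not taken from the post or the default rules):

        // <Name>Avoid overly long methods (illustrative CQLinq sketch)</Name>
        warnif count > 0
        from m in JustMyCode.Methods
        where m.NbLinesOfCode > 30
        orderby m.NbLinesOfCode descending
        select new { m, m.NbLinesOfCode }

    The "warnif count > 0" prefix is what turns the query result into a rule violation list.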

    Read the article

  • Building a Debian package with two buildsystems

    - by queueoverflow
    I have a package that needs to be built with both a regular makefile and a setup.py. The Debian packaging magic invoked via debuild recognizes a makefile and does the right make; make install DESTDIR=??? thing and gets it working. When I only have a setup.py sitting there and have

        dh $@ --with python3 --buildsystem pybuild

    in debian/rules, it will correctly install the Python module with

        python3 setup.py build
        python3 setup.py install --install-layout deb --root=???

    I do not know all those flags, and I think that I do not need to. I just want the makefile magic to happen, and then the setup.py magic. How can I tell debuild to do both? When I put the following in debian/rules:

        %:
            dh $@
            dh $@ --with python3 --buildsystem pybuild

    it will only put the first one into the resulting package. I tried deleting the debhelper.log between the two, but that did not change much.
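    One approach debhelper supports is to keep a single dh sequence and override the auto build/install steps so that each buildsystem runs explicitly. A sketch of debian/rules along those lines - untested here, so treat it as a starting point rather than a known-good recipe:

        #!/usr/bin/make -f
        # debian/rules sketch: run both the makefile and the pybuild steps.
        # (Recipe lines must be indented with real tabs in an actual rules file.)

        %:
            dh $@ --with python3

        override_dh_auto_build:
            dh_auto_build --buildsystem=makefile
            dh_auto_build --buildsystem=pybuild

        override_dh_auto_install:
            dh_auto_install --buildsystem=makefile
            dh_auto_install --buildsystem=pybuild

    The same override pattern can be applied to dh_auto_configure and dh_auto_clean if both build systems need those steps as well.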

    Read the article

  • The Obscured C Competition is Back!

    - by TATWORTH
    The Obscure C competition is back at http://www.ioccc.org/ and is open until 12 Jan 2012. The aims of the competition are:
    To write the most Obscure/Obfuscated C program under the rules at http://www.ioccc.org/2011/rules.txt.
    To show the importance of programming style, in an ironic way.
    To stress C compilers with unusual code.
    To illustrate some of the subtleties of the C language.
    To provide a safe forum for poor C code. :-)
    Even if you are not a C programmer, it is worth looking at some past entries at http://en.wikipedia.org/wiki/International_Obfuscated_C_Code_Contest

    Read the article

  • FxCop ... Where have you been all my life?

    - by PhilSando
    I was recently introduced to FxCop, Microsoft's tool for analyzing managed code assemblies. It points out possible design, localization, performance, and security improvements against a predefined set of rules (and also accepts custom rules). At first I was unsure how to go about using it, as it seems to be aimed at software that produces .exe and .dll files. It's easy to get around this with the following steps:
    1) Create a new folder (e.g. C:\Code Analysis)
    2) Publish your web application into the new folder
    3) Open FxCop and add all the DLL files from the newly created bin folder to be scrutinized
    Lots more info and docs are available on MSDN, and you can also download FxCop for free.
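    If you prefer to script the same check (on a build server, for example), FxCop also ships a command-line runner. From memory an invocation looks roughly like the line below; the paths are made up and the exact switch spelling should be verified against the FxCopCmd documentation before relying on it:

        FxCopCmd.exe /file:"C:\Code Analysis\bin\MyWebApp.dll" /out:"C:\Code Analysis\fxcop-results.xml" /console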

    Read the article

  • ETPM Forms Accelerator

    - by MHundal
    The ETPM Forms Accelerator provides a template that can be used to enter data related to Registration and Tax Forms. The Forms Accelerator includes a worksheet for each portion of forms development (Form Type, Form Section, Form Lines and Form Rules) and provides the details that must be defined in ETPM. This allows for taking an existing form and translating its details into the spreadsheet; the spreadsheet can then be used to define the details in the system. In addition, each of the items to be defined is explained in detail: what the field expects and, based on the input, how it impacts the field and form definition.

    This is a living document - as feedback is provided, the document will be updated. The goal of this accelerator is to be an aid in the forms development process, and we encourage feedback to help improve the document. The document is for ETPM 2.3.1; implementations using older versions of ETPM will find that some of the field definition options may not exist in their current system.

    The attached spreadsheet contains the following worksheets:
    Instructions: High-level overview of the different worksheets provided.
    Form Type: The fields to be populated when defining the Form Type for a Registration or Tax Form.
    Form Section: The fields to be populated when creating a Form Section. The number of sections will differ based on the form being implemented.
    Form Lines: The fields to be populated when creating different Form Lines. The number of lines per section will differ based on the form being implemented.
    Form Rules: Based on the form, allows for documenting the Form Rules to be configured based on form instructions and Form Lines.

    Right click on the link and select the "Save Link As" option: ETPM Forms Accelerator.xls

    Please provide feedback to [email protected]. Your feedback is encouraged and appreciated.

    Read the article

  • Any legal issue in developing app similar to others?

    - by demotics2002
    There is a game I want to develop for mobile devices, e.g. cellphone/tablet. I have been looking for this game and couldn't find it, so I decided to just do it on my own. But I'm worried that there will be legal issues. I'm sorry, but I do not know what the process is for doing this. Take, for example, the game Bejeweled Blitz. If I develop something similar, do I have to contact the developer and ask for permission if my game has similar rules but uses shapes rather than jewels? The original game exists only on Windows, for free. If I develop a game with exactly the same rules but a different display, am I allowed to sell it? Thanks...

    Read the article

  • Inserting x200s into (ultrabase) docking station: mirror screen is always activated, leading to non-optimal resolution

    - by kiu
    The built-in LCD should be 1440x900 and the external LCD should be 1920x1080. If the X200s is inserted into the docking station, the "mirror screens" option is always activated, leading to a resolution of 1152x864, which looks terrible on both the built-in and the external LCD. My manual configuration for docking mode (separate screens with maximum resolution) should be respected, but the "Make Default" button has no effect. I found a quick fix, but this can't be the official Ubuntu way...

    /etc/udev/rules.d/99-vga.rules:

        SUBSYSTEM=="drm", ACTION=="change", RUN+="/usr/local/sbin/vga_changed.sh"

    /usr/local/sbin/vga_changed.sh:

        #!/bin/bash
        dmode="$(cat /sys/class/drm/card0-VGA-1/status)"
        export DISPLAY=:0.0
        if [ "${dmode}" = disconnected ]; then
            /usr/bin/sudo -u kiu /usr/bin/xrandr --output LVDS1 --mode 1440x900 --pos 0x0 --output VGA1 --off
        elif [ "${dmode}" = connected ]; then
            /usr/bin/sudo -u kiu /usr/bin/xrandr --output LVDS1 --mode 1440x900 --pos 0x0 --output VGA1 --auto --mode 1920x1080 --right-of LVDS1
        fi

    Read the article

  • How to properly document functionality in an agile project?

    - by RoboShop
    So we've recently finished the first phase of our project. We used agile with fortnightly sprints, and whilst the application turned out well, we're now turning our eyes to some of the maintenance tasks. One maintenance task is that all of our documentation appears in the form of specs. These specs describe one or more stories and generally are a body of work which a few devs could knock over in a week. For development that works really well: every two weeks the devs get handed a spec, and it's a nice discrete chunk of work that they can just do.

    From a documentation point of view, this has become a mess. The problem with writing specs that are focused on delivering just-in-time requirements to developers is that we haven't placed much emphasis on the big picture. Specs come from all different angles - one could describe a standard function, one could describe parts of a workflow, one could describe a particular screen... And now we have business rules about our application scattered across 120 documents. Looking for the document that covers a particular business rule or function is quite hard because you don't know which document has the information, and making a change request is equally hard because, once again, we are unsure in which spec to make the change.

    So we have maybe a couple of weeks of lull before it's back to speccing out functionality for the next phase, but in this time I'd like to revisit our processes. I think the way we have worked so far in terms of delivering fortnightly specs works well, but we also need a way to manage our documentation so that our business rules for a given function/workflow are easy to locate and change. I have two ideas.

    One is that we compile all of our specs into a series of master specs, broken up by a few broad functional areas. The specs describe the sprint; the master specs describe the system. The problems I can see are: 1) our existing 120 specs are not all neatly divided into broad functional areas - some will require breaking up, merging, etc., which will take a lot of time; 2) we'll be writing specs and updating master specs in each new sprint, which seems like double the work - and then do the devs look at the spec or the master spec?

    My other suggestion is to concede that our documentation is too big of a mess, and manage that mess going forward. So we go through each spec, assign keywords to it, and then when we want to find a function we search for the keyword. The problem I can see is that the business rules are still scattered everywhere; keywords just make them easier to find.

    Anyway, if anyone has any decent ideas or any experience to share about how best to manage documentation, I would really appreciate it.

    Read the article

  • I can't login to facebook from any browser

    - by user92974
    I'm using Ubuntu 12.04 and I'm having problems with Facebook. It's really hard to log in - it takes like 3 to 5 minutes. At first I thought it was a Flash problem, but then I realized it could be some proxy config file. Other sites don't have problems. Yesterday I tried the Tor Browser Bundle and installed Privoxy using these rules: http://www.neilvandyke.org/privoxy-rules/ . Today I removed Privoxy and its config file with the Ubuntu Software Center and Ubuntu Tweak. I can't really find the problem, and my Windows PC does not have any problem with the same modem. I don't have an ISP proxy; it's just a direct Internet connection using automatic DHCP. Maybe I am missing something else, but I want to be sure, so I'm asking. PS: Sorry for my English, I'm Argentinian.
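    A quick way to confirm nothing was left behind after removing Privoxy (these are generic checks, not from the original question; the gsettings schema is the one GNOME-based 12.04 desktops normally use):

        # Look for leftover proxy environment variables
        env | grep -i proxy

        # Check the desktop-wide proxy mode ('none' means no proxy is configured)
        gsettings get org.gnome.system.proxy mode

    If either of these still points at 127.0.0.1:8118 (Privoxy's usual listen address), the browser may still be trying to route traffic through the removed proxy.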

    Read the article

  • Recent EC Meetings - RIM forfeits EC seat

    - by heathervc
    Materials and minutes from the JCP EC Face-to-Face Meeting, held September 2012 in Prague, are now available on the EC Meeting Summaries page.  Topics included JCP.Next, a JCP 2.8 progress report, Inactive JSRs, and two Spec Lead presentations. In October 2011, new EC Standing Rules went into effect. The Rules include the following: "Missing five meetings in a row, or missing more than two-thirds of all meetings in any consecutive twelve-month period, results in loss of EC membership."  Last week, the JCP EC met for their October EC teleconference meeting.  RIM missed this meeting, and has now missed five meetings in a row (see the attendance chart); therefore, RIM has forfeited their EC membership. Results from the 2012 EC Elections will be available on 30 October.  The new merged EC will go into effect on 12 November.

    Read the article

  • Can a domain specific language be used to represent the Open SRD?

    - by NeoModulus
    I am in the early stages of creating an open source C# library that would allow developers to drop the open SRD (http://www.d20srd.org/) into an existing project. Abstracted, it is a complex set of tightly coupled business rules. Having previously worked on an adaptive object model project for health care risk management, I began with that pattern in mind. Due to the high coupling of the rules, it is becoming apparent that the project may require some kind of scripting. Having started researching DSL implementations, I am now considering scrapping the adaptive object model in favor of a domain specific language. I have not worked with domain specific languages, so my question is: is it reasonable to assume a domain specific language can be used to represent the open SRD?
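    As a rough illustration of what an internal (embedded) DSL for SRD-style rules could look like in C# - the names, numbers, and structure below are entirely hypothetical and are not taken from the question or from the SRD itself:

        using System;
        using System.Collections.Generic;

        // Hypothetical character state the rules operate on.
        class Character
        {
            public int BaseAttackBonus;
            public int AttackBonus;
            public int DamageBonus;
        }

        // A minimal "rule" built from a condition and an effect - the kind of
        // building block an embedded DSL for the SRD might expose.
        record Rule(string Name, Func<Character, bool> Applies, Action<Character> Apply);

        class Demo
        {
            static void Main()
            {
                var rules = new List<Rule>
                {
                    // Illustrative only; the modifiers are made up, not actual SRD text.
                    new Rule("Power Attack",
                        c => c.BaseAttackBonus >= 1,
                        c => { c.AttackBonus -= 2; c.DamageBonus += 2; })
                };

                var fighter = new Character { BaseAttackBonus = 3 };
                foreach (var r in rules)
                    if (r.Applies(fighter)) r.Apply(fighter);

                Console.WriteLine($"{fighter.AttackBonus} / {fighter.DamageBonus}"); // -2 / 2
            }
        }

    Whether this stays an embedded C# DSL or becomes an external language with its own parser is exactly the trade-off the question is asking about.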

    Read the article

  • iptables rule problem

    - by thakrage
    I've been searching around for some time now, but nothing solves my problem. I'm setting up a mail server, but when loading the iptables rules I get an error: iptables-restore: line 2 failed. I'm trying to use the following /etc/iptables.test.rules:

        # Allows SMTP access
        -A INPUT -p tcp --dport 25 -j ACCEPT
        # Allows pop and pops connections
        -A INPUT -p tcp --dport 110 -j ACCEPT
        -A INPUT -p tcp --dport 995 -j ACCEPT
        # Allows imap and imaps connections
        -A INPUT -p tcp --dport 143 -j ACCEPT
        -A INPUT -p tcp --dport 993 -j ACCEPT

    After this, I'm issuing the following command:

        sudo iptables-restore < /etc/iptables.test.rules

    However, it returns: iptables-restore: line 2 failed. I don't know what the problem is. Can anyone clarify? BTW, I'm using Ubuntu 10.10.
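    For context, a likely cause rather than a confirmed diagnosis: iptables-restore expects a table header, chain policy lines, and a COMMIT around the rules, so a bare list of -A lines fails on the first rule it reaches. A sketch of the same file in that format:

        *filter
        :INPUT ACCEPT [0:0]
        :FORWARD ACCEPT [0:0]
        :OUTPUT ACCEPT [0:0]

        # Allows SMTP access
        -A INPUT -p tcp --dport 25 -j ACCEPT
        # Allows pop and pops connections
        -A INPUT -p tcp --dport 110 -j ACCEPT
        -A INPUT -p tcp --dport 995 -j ACCEPT
        # Allows imap and imaps connections
        -A INPUT -p tcp --dport 143 -j ACCEPT
        -A INPUT -p tcp --dport 993 -j ACCEPT

        COMMIT

    The chain policies shown here are permissive placeholders; a real mail-server ruleset would normally tighten them.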

    Read the article

  • eth0 missing after upgrading from Hoary to Dapper

    - by Twisol
    I'm trying to upgrade a fairly old server that's been running Hoary for the last five years. I followed the directions on the wiki, but when I restarted after upgrading to Dapper, eth0 disappeared from ifconfig -a. I can see two ethernet adapters in lspci and lshw, and if I put in an Ubuntu 10.10 LiveCD it registers eth0 and eth1 perfectly well. Their MAC addresses also match what's in /etc/iftab. It was working fine before the upgrade, and I have no idea what else I should be trying at this point. The server is entirely cut off from the network right now. EDIT: /etc/udev/rules.d/70-persistent-net.rules doesn't exist, either.

    Read the article

  • Best way to rename existing unique field names in database?

    - by Rajdeep Siddhapura
    I have a database table that contains id, filename, userId. id is a unique identifier, filename should also be unique, and the table may contain 10000 records. When a user uploads a file it should be entered in the database according to these rules:
    - If there is no record with the same filename, it should be added as-is (e.g. foobar.pdf).
    - If there is a record with the same filename, it should be added as uploadedName(2).ext (foobar(2).pdf).
    - If there are n records with the same base filename (foobar), it should be added as uploadedName(n+1).ext (foobar(20).pdf).
    - Now if foobar(2).pdf is uploaded, it should be added as foobar(2)(2).pdf, and so on.
    This pattern needs to be followed because the file is already being uploaded on the client side using AJAX before the details are sent to the server, and the file hosting service follows the above rules to name the files. My solution: maintain a file that contains all the names and the number of times each has occurred. If a filename that already exists in the file is entered, increase the occurrence count and generate the new name; otherwise add it to the file. If the generated name is already in the database, add it to the file and generate a new name.
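    An alternative to keeping a separate counter file is to derive the next name from what is already stored at insert time. A small C# sketch of that naming rule - illustrative only; the in-memory set stands in for a SELECT against the filename column:

        using System;
        using System.Collections.Generic;
        using System.IO;

        static class UniqueNamer
        {
            // Returns the name to store, following the rules from the question:
            // foobar.pdf, then foobar(2).pdf, foobar(3).pdf, ... for repeats.
            public static string NextName(string uploadedName, ICollection<string> existingNames)
            {
                if (!existingNames.Contains(uploadedName))
                    return uploadedName;

                string stem = Path.GetFileNameWithoutExtension(uploadedName);
                string ext = Path.GetExtension(uploadedName); // includes the dot, e.g. ".pdf"

                for (int n = 2; ; n++)
                {
                    string candidate = $"{stem}({n}){ext}";
                    if (!existingNames.Contains(candidate))
                        return candidate;
                }
            }
        }

        // Usage:
        // var names = new HashSet<string> { "foobar.pdf", "foobar(2).pdf" };
        // UniqueNamer.NextName("foobar.pdf", names);     // "foobar(3).pdf"
        // UniqueNamer.NextName("foobar(2).pdf", names);  // "foobar(2)(2).pdf"

    In a real system the check-and-insert would also need a unique constraint or a transaction so that two concurrent uploads cannot both claim the same generated name.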

    Read the article

  • MySQL my.cnf file not being read, Ubuntu 10.04 64bit

    - by reallyordinary
    I've been researching this for a few hours with no luck. Basically it looks like my server's my.cnf file isn't being read at all. I've searched my server, and there's only one my.cnf file on it, located at /etc/mysql/my.cnf. Its ownership is root:root. I'm running Ubuntu 10.04 64-bit on a Linode.com server with the latest versions of MySQL and PHP installed. I've edited the my.cnf file, commented out "skip-innodb", and set InnoDB to be the default storage engine using

        default-storage-engine = innodb

    and then restarted MySQL. But when I do show engines, MyISAM is still coming up as the default engine. Also, none of the InnoDB settings I've added to the my.cnf file are being read. For example, I have this in my.cnf:

        innodb_buffer_pool_size=4G

    But in phpMyAdmin, InnoDB is showing as having a buffer pool size of 8,192 KiB. Similarly, I have this in my.cnf:

        innodb_data-file_path = ibdata1:500M:autoextend

    But in phpMyAdmin it reads as ibdata1:10M:autoextend. It doesn't look like MyISAM info is being read from the my.cnf file either: the my.cnf file has skip-external-locking commented out, but it's showing as "on" in phpMyAdmin. So, yeah, it looks like nothing in the my.cnf file is being read at all. But the server still works - I'm running a Drupal site on it and it seems to operate fine. So MySQL seems to be drawing default settings from... some mysterious secret location. Any idea how I can make MySQL see and use this my.cnf file? Actually, wait - it looks like it may be being read after all, not sure. I checked the error log and found this:

        101128 4:28:52 [ERROR] Cannot find or open table databasename/cache_apachesolr from the internal data dictionary of InnoDB though the .frm file for the table exists. Maybe you have deleted and recreated InnoDB data files but have forgotten to delete the corresponding .frm files of InnoDB tables, or you have moved .frm files to another database? or, the table contains indexes that this version of the engine doesn't support. See http://dev.mysql.com/doc/refman/5.1/en/innodb-troubleshooting.html how you can resolve the problem.
        InnoDB: Error: auto-extending data file ./ibdata1 is of a different size
        InnoDB: 640 pages (rounded down to MB) than specified in the .cnf file:
        InnoDB: initial 32000 pages, max 0 (relevant if non-zero) pages!
        InnoDB: Could not open or create data files.
        InnoDB: If you tried to add new data files, and it failed here,
        InnoDB: you should now edit innodb_data_file_path in my.cnf back
        InnoDB: to what it was, and remove the new ibdata files InnoDB created
        InnoDB: in this failed attempt. InnoDB only wrote those files full of
        InnoDB: zeros, but did not yet use them in any way. But be careful: do not
        InnoDB: remove old data files which contain your precious data!
        101128 4:28:52 [ERROR] Plugin 'InnoDB' init function returned error.
        101128 4:28:52 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
        101128 4:28:52 [ERROR] /usr/sbin/mysqld: unknown variable 'innodb_lock_wait_timout=50'
        101128 4:28:52 [ERROR] Aborting
        101128 4:28:52 [Note] /usr/sbin/mysqld: Shutdown complete
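    One generic check worth adding here (my suggestion, not from the original post): ask mysqld which option files it reads and in what order, which shows whether /etc/mysql/my.cnf is even on the list. Note that the error log above already suggests the file is being parsed, since startup trips over the misspelled innodb_lock_wait_timout variable (the correct name is innodb_lock_wait_timeout) and the mismatched ibdata1 size.

        # Show which configuration files mysqld reads, in order
        mysqld --verbose --help | grep -A1 "Default options"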

    Read the article

  • Combine the Address & Search Bars in Firefox

    - by Asian Angel
    The Search Bar in Firefox is very useful for finding additional information or images while browsing, but the UI space it takes up can be frustrating at times. Now you can reclaim that UI space and still have access to all that searching goodness with the Foobar extension. Note: This is about the Foobar Firefox extension and not to be confused with Foobar2000, the open source music player.

    Before: If you have the “Search Bar” displayed there is no doubt that it is taking up valuable space in your browser’s UI. What you need is the ability to reclaim that UI space and still have the same access to your search capability as before…no more sacrificing one for a gain with the other.

    After: As soon as you have installed the extension you can see that the top part of your browser will look much sleeker without the “Search Bar” to clutter it up. The “Search Engine Icon” will now be visible inside of your “Address Bar” as seen here. You will be able to access the same “Search Engine Menu” as before by clicking on the “Search Engine Icon”. There are two display modes for search results (setting available in the “Options”). The first one shown here is “Simple Mode”, where all results are in a condensed format. Notice that not only are there search suggestions but also “Bookmarks & History” listings as well. You can literally get the best of both when conducting a search. Note: The number of entries for search suggestions and bookmark/history listings can be adjusted higher or lower in the “Options”. The second one is “Rich Mode”, where the results are shown with more details. Choose the “mode” that best suits your personal style. For our first example you can see the results when we conducted a quick search on “Windows 7” (using the first of the three offerings shown from Bing). Our second example was a search for “Flowers” using our Photobucket search engine. Once again nice results opened in a new tab for us.

    Options: The options are easy to go through. It is really nice to be able to choose the number of results that you want displayed and the format that you want them shown in. Note: Changing the “Suggestion popup style” will require a browser restart to take effect.

    Conclusion: If you love using the “Search Bar” in Firefox but want to reclaim the UI space then you will definitely want to add this extension to your browser. The ability to customize the number of results and choose the formatting makes this extension even better.

    Links: Download the Foobar extension (Mozilla Add-ons)

    Read the article

  • Top 10 Linked Blogs of 2010

    - by Bill Graziano
    Each week I send out a SQL Server newsletter and include links to interesting blog posts. I've linked to over 500 blog posts so far in 2010. Late last year I started storing those links in a database so I could do a little reporting. I tend to link to posts related to the OLTP engine, and I also try to link to the individual blogger in the group blogs; unfortunately that wasn't possible for the SQLCAT and CSS blogs. I also have a real weakness for posts related to PASS. These are the top 10 blogs that I linked to during the year, ordered by the number of posts I linked to.
    Paul Randal – Paul writes extensively on the internals of the relational engine. Lots of great posts around transactions, the transaction log, disaster recovery, corruption, indexes and DBCC. I also linked to many of his SQL Server myths posts.
    Glenn Berry – Glenn writes very interesting posts on how hardware affects SQL Server. I especially like his posts on the various CPU platforms. These aren't necessarily topics that I'm searching for, but I really enjoy reading them.
    The SQLCAT Team – This Microsoft team focuses on the largest and most interesting SQL Server installations. They regularly publish white papers and best practices.
    SQL Server CSS Team – These are the top engineers from the Microsoft Customer Service and Support group. These are the folks you finally talk to after your case has been escalated about 20 times. They write about the interesting problems they find.
    Brent Ozar – The posts I linked to mostly focused on the relational engine: CPU, NUMA, SSD drives, performance monitoring, etc. But Brent writes about a real variety of topics including blogging, social networking, speaking, the MCM, SQL Azure and anything else that seems to strike his fancy. His posts are always well written and thought provoking.
    Jeremiah Peschka – A number of Jeremiah's posts weren't about SQL Server. He's very active in the "NoSQL" area and I linked to a number of those posts. I think it's important for people to know what other technologies are out there.
    Brad McGehee – Brad writes about being a DBA, including maintenance plans, DBA checklists, compression and auditing.
    Thomas LaRock – I linked to a variety of posts, from PBM to networking to 24 Hours of PASS to TDE. Just a real variety of topics. Tom always writes with an interesting style, usually mixing in a movie theme and/or bacon.
    Aaron Bertrand – Many of my links this year were about Denali features. He also had a great series on bad habits to kick.
    Michael J. Swart – This last one surprised me. There are some well known SQL Server bloggers below Michael on this list. I linked to posts on indexes, hierarchies, transactions and I/O performance, and a variety of other engine-related posts. All are interesting and well thought out. Many of his non-SQL posts are also very good. He seems to have an interest in puzzles and other brain teasers. Michael, I won't be surprised again!

    Read the article

  • HOWTO Turn off SPARC T4 or Intel AES-NI crypto acceleration.

    - by darrenm
    Since we released hardware crypto acceleration for SPARC T4 and Intel AES-NI support, a common question has come up: 'How do I test without the hardware crypto acceleration?'. Initially this came up just for development use, so developers can do unit testing on a machine that has hardware offload but still cover the code paths for a machine that doesn't (our integration and release testing runs on all supported types of hardware anyway). I've also seen it asked in a customer context, so that we can show there is a performance gain from the hardware crypto acceleration (not just the fact that SPARC T4 is a much faster processor than T3) and measure what that gain is for their application.

    With SPARC T2/T3 we could easily disable the hardware crypto offload by running 'cryptoadm disable provider=n2cp/0'. We can't do that with SPARC T4 or with Intel AES-NI because in both of those classes of processor the encryption doesn't require a device driver; instead it is unprivileged, userland-callable instructions. It turns out there is a way to do this by using features of the Solaris runtime loader (ld.so.1).

    First I need to expose a little bit of implementation detail about how the Solaris Cryptographic Framework is implemented in Solaris 11. One of the new Solaris 11 features of the linker/loader is the ability to have a single ELF object with multiple different implementations of the same functions that are selected at runtime based on the capabilities of the machine. The alternative is having the application call getisax() and make the choice itself. We use this functionality of the linker/loader when we build the userland libraries for the Solaris Cryptographic Framework (specifically libmd.so and the - unfortunately misnamed, for historical reasons - libsoftcrypto.so).

    The Solaris linker/loader allows control of a lot of its functionality via environment variables, and we can use that to control which version of the cryptographic functions we run. To do this we simply export the LD_HWCAP environment variable with values that tell ld.so.1 not to select the HWCAP section matching certain features, even if isainfo says they are present. For SPARC T4 that would be:

        export LD_HWCAP="-aes -des -md5 -sha256 -sha512 -mont -mpul"

    and for Intel systems with AES-NI support:

        export LD_HWCAP="-aes"

    This will work for consumers of the Solaris Cryptographic Framework that use the Solaris PKCS#11 libraries or use libmd.so interfaces directly. It also works for the Oracle DB and Java JCE. However, it does not work for the default-enabled OpenSSL "t4" or "aes-ni" engines (unfortunately), because they make explicit calls to getisax() themselves rather than using multiple ELF cap sections. However, we can still use OpenSSL to demonstrate this by explicitly selecting the "pkcs11" engine, using only a single process and thread.

        $ openssl speed -engine pkcs11 -evp aes-128-cbc
        ...
        type             16 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes
        aes-128-cbc      54170.81k   187416.00k   489725.70k   805445.63k  1018880.00k

        $ LD_HWCAP="-aes" openssl speed -engine pkcs11 -evp aes-128-cbc
        ...
        type             16 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes
        aes-128-cbc      29376.37k    58328.13k    79031.55k    86738.26k    89191.77k

    We can clearly see the difference this makes when AES offload to the SPARC T4 is disabled. The "t4" engine is faster than the pkcs11 one because there is less overhead (again on a SPARC T4-1 using only a single process/thread - using -multi you will get even bigger numbers).
        $ openssl speed -evp aes-128-cbc
        ...
        type             16 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes
        aes-128-cbc      85526.61k    89298.84k    91970.30k    92662.78k    92842.67k

    Yet another cool feature of the Solaris linker/loader - thanks Rod and Ali. Note that the openssl speed output above is not intended to show the actual performance of any particular benchmark, just that there is a significant improvement from using hardware acceleration on SPARC T4. For cryptographic performance benchmarks see the http://blogs.oracle.com/BestPerf/ postings.

    Read the article

  • Don’t string together XML

    - by KyleBurns
    XML has been a pervasive tool in software development for over a decade.  It provides a way to communicate data in a manner that is simple to understand and free of platform dependencies.  Also pervasive in software development is what I consider to be the anti-pattern of using string manipulation to create XML.  This usually starts with a “quick and dirty” approach because you need an XML document and looks like (for all of the examples here, we’ll assume we’re writing the body of a method intended to take a Contact object and return an XML string): return string.Format("<Contact><BusinessName>{0}</BusinessName></Contact>", contact.BusinessName);   In the code example, I created (or at least believe I created) an XML document representing a simple contact object in one line of code with very little overhead.  Work’s done, right?  No it’s not.  You see, what I didn’t realize was that this code would be used in the real world instead of my fantasy world where I own all the data and can prevent any of it containing problematic values.  If I use this code to create a contact record for the business “Sanford & Son”, any XML parser will be incapable of processing the data because the ampersand is special in XML and should have been encoded as &amp;. Following the pattern that I have seen many times over, my next step as a developer is going to be to do what any developer in his right mind would do – instruct the user that ampersands are “bad” and they cannot be used without breaking computers.  This may work in many cases and is often accompanied by logic at the UI layer of applications to block these “bad” characters, but sooner or later someone is going to figure out that other applications allow for them and will want the same.  This often leads to the creation of “cleaner” functions that perform a replace on the strings for every special character that the person writing the function can think of.  The cleaner function will usually grow over time as support requests reveal characters that were missed in the initial cut.  Sooner or later you end up writing your own somewhat functional XML engine. I have never been told by anyone paying me to write code that they would like to buy a somewhat functional XML engine.  My employer/customer’s needs have always been for something that may use XML, but ultimately is functionality that drives business value. I’m not going to build an XML engine. So how can I generate XML that is always well-formed without writing my own engine?  Easy – use one of the ones provided to you for free!  If you’re in a shop that still supports VB6 applications, you can use the DomDocument or MXXMLWriter object (of the two I prefer MXXMLWriter, but I’m not going to fully describe either here).  
    For .Net Framework applications prior to the 3.5 framework, the code is a little more verbose than I would like, but easy once you understand what pieces are required:

        using (StringWriter sw = new StringWriter())
        {
            using (XmlTextWriter writer = new XmlTextWriter(sw))
            {
                writer.WriteStartDocument();
                writer.WriteStartElement("Contact");
                writer.WriteElementString("BusinessName", contact.BusinessName);
                writer.WriteEndElement(); // end Contact element
                writer.WriteEndDocument();
                writer.Flush();
                return sw.ToString();
            }
        }

    Looking at that code, it's easy to understand why people are drawn to the initial one-liner. Lucky for us, the 3.5 .Net Framework added the System.Xml.Linq.XElement object. This object takes away a lot of the complexity present in the XmlTextWriter approach and allows us to generate the document as follows:

        return new XElement("Contact",
            new XElement("BusinessName", contact.BusinessName)).ToString();

    While it is very common for people to use string manipulation to create XML, I've discussed here reasons not to use this method and introduced powerful APIs that are built into the .Net Framework as an alternative. I've given a very simplistic example here to highlight the most basic XML generation task. For more information on the XmlTextWriter and XElement APIs, check out the MSDN library.

    Read the article

  • iptables issue on plesk

    - by Fred Rufin
    I don't know how to open a specific port (rtmp = 1935) on my CentOS server using Plesk or iptables. I created new rules for port 1935 in/out using Plesk / Modules / Firewall, but this doesn't work. Nmap scanning tells me this: 1935/tcp filtered rtmp. So I decided to have a look at my iptables using SSH (iptables -L), and iptables seems to contain my rules (tcp spt:macromedia-fcs):

        Chain INPUT (policy DROP)
        target     prot opt source       destination
        VZ_INPUT   all  --  anywhere     anywhere
        ACCEPT     all  --  anywhere     anywhere     state RELATED,ESTABLISHED
        REJECT     tcp  --  anywhere     anywhere     tcp flags:!FIN,SYN,RST,ACK/SYN reject-with tcp-reset
        DROP       all  --  anywhere     anywhere     state INVALID
        ACCEPT     all  --  anywhere     anywhere

        Chain FORWARD (policy DROP)
        target     prot opt source       destination
        VZ_FORWARD all  --  anywhere     anywhere
        ACCEPT     all  --  anywhere     anywhere     state RELATED,ESTABLISHED
        REJECT     tcp  --  anywhere     anywhere     tcp flags:!FIN,SYN,RST,ACK/SYN reject-with tcp-reset
        DROP       all  --  anywhere     anywhere     state INVALID
        ACCEPT     all  --  anywhere     anywhere

        Chain OUTPUT (policy DROP)
        target     prot opt source       destination
        VZ_OUTPUT  all  --  anywhere     anywhere
        ACCEPT     all  --  anywhere     anywhere     state RELATED,ESTABLISHED
        REJECT     tcp  --  anywhere     anywhere     tcp flags:!FIN,SYN,RST,ACK/SYN reject-with tcp-reset
        DROP       all  --  anywhere     anywhere     state INVALID
        ACCEPT     all  --  anywhere     anywhere

        Chain VZ_FORWARD (1 references)
        target     prot opt source       destination

        Chain VZ_INPUT (1 references)
        target     prot opt source       destination
        ACCEPT     tcp  --  anywhere     anywhere     tcp dpt:http
        ACCEPT     tcp  --  anywhere     anywhere     tcp dpt:ssh
        ACCEPT     tcp  --  anywhere     anywhere     tcp dpt:smtp
        ACCEPT     tcp  --  anywhere     anywhere     tcp dpt:pop3
        ACCEPT     tcp  --  anywhere     anywhere     tcp dpt:domain
        ACCEPT     udp  --  anywhere     anywhere     udp dpt:domain
        ACCEPT     tcp  --  anywhere     anywhere     tcp dpts:filenet-tms:65535
        ACCEPT     udp  --  anywhere     anywhere     udp dpts:filenet-tms:65535
        ACCEPT     tcp  --  anywhere     anywhere     tcp dpt:cddbp-alt
        ACCEPT     tcp  --  anywhere     anywhere     tcp dpt:pcsync-https
        ACCEPT     tcp  --  localhost.localdomain localhost.localdomain
        ACCEPT     tcp  --  anywhere     anywhere     tcp dpt:macromedia-fcs
        ACCEPT     udp  --  localhost.localdomain localhost.localdomain

        Chain VZ_OUTPUT (1 references)
        target     prot opt source       destination
        ACCEPT     tcp  --  anywhere     anywhere     tcp spt:http
        ACCEPT     tcp  --  anywhere     anywhere     tcp spt:ssh
        ACCEPT     tcp  --  anywhere     anywhere     tcp spt:smtp
        ACCEPT     tcp  --  anywhere     anywhere     tcp spt:pop3
        ACCEPT     tcp  --  anywhere     anywhere     tcp spt:domain
        ACCEPT     udp  --  anywhere     anywhere     udp spt:domain
        ACCEPT     tcp  --  anywhere     anywhere
        ACCEPT     udp  --  anywhere     anywhere
        ACCEPT     tcp  --  anywhere     anywhere     tcp spt:cddbp-alt
        ACCEPT     tcp  --  anywhere     anywhere     tcp spt:pcsync-https
        ACCEPT     tcp  --  localhost.localdomain localhost.localdomain
        ACCEPT     tcp  --  anywhere     anywhere     tcp spt:macromedia-fcs
        ACCEPT     udp  --  localhost.localdomain localhost.localdomain

    My rules seem to be OK, but there is no connection to port 1935 using a browser. I can connect to this port with SSH (typing "wget myServerIP:1935"), but maybe that is because it goes over an SSH tunnel? I don't know what to do.
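    Two generic checks (my suggestion, not from the original post) that help tell a firewall problem apart from a service problem: confirm something is actually listening on TCP 1935 on the server, and test the port locally before testing it from outside.

        # On the server: is anything listening on TCP 1935?
        netstat -tlnp | grep 1935

        # Local test (bypasses any external firewall); if this works but a remote
        # test fails, the block is upstream - e.g. the hosting node's firewall.
        nc -zv 127.0.0.1 1935

    Since the server is a Virtuozzo container (the VZ_* chains suggest this), a port that is open locally but still shows as filtered from outside may be blocked on the hosting node rather than in the container's own iptables.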

    Read the article

  • Automount in Ubuntu 9.10

    - by easyrider
    Hi. By default Ubuntu doesn't mount internal NTFS hard drives automatically. An fstab solution doesn't work properly because of conflicts with the "intelligent" mount system: if I add my HD to fstab and reboot, it will be mounted, but if I go to Nautilus, open the Places panel, click the eject button (unmount) and then click on the HD again to mount it, I get an error. In 9.04, to solve this problem you need to modify the HAL rules in /etc/hal/... preferences.fdi; in my case I modified it for only one drive:

        <device>
          <match key="storage.hotpluggable" bool="false">
            <match key="storage.removable" bool="false">
              <merge key="storage.automount_enabled_hint" type="bool">false</merge>
              <match key="storage.model" string="ST3250310NS">
                <merge key="storage.automount_enabled_hint" type="bool">true</merge>
              </match>
            </match>
          </match>
        </device>

    But this is not working in 9.10 - did the devs move this function from HAL to devkit-disk or udev? I don't know. Could you please tell me where automount rules are stored in 9.10? And how do I create new rules, and what program controls automount in 9.10?

    Read the article
