Search Results

Search found 1047 results on 42 pages for 'restrict'.


  • Restricting Input in HTML Textboxes to Numeric Values

    - by Rick Strahl
    Ok, here’s a fairly basic one – how to force a textbox to accept only numeric input. Somebody asked me this today on a support call so I did a few quick lookups online and found the solutions listed rather unsatisfying. The main problem with most of the examples I could dig up was that they only accept the numeric keys, but that provides a rather lame user experience. You still need to allow basic operational keys for a textbox – navigation keys, backspace and delete, tab/shift-tab and the Enter key – to work, or else the textbox will feel very different from a standard text box. Yes, there are plug-ins that allow masked input easily enough, but most are fixed width, which is difficult to do with plain number input. So I took a few minutes to write a small reusable plug-in that handles this scenario. Imagine you have a couple of textboxes on a form like this:

        <div class="containercontent">
            <div class="label">Enter a number:</div>
            <input type="text" name="txtNumber1" id="txtNumber1" value="" class="numberinput" />
            <div class="label">Enter a number:</div>
            <input type="text" name="txtNumber2" id="txtNumber2" value="" class="numberinput" />
        </div>

    and you want to restrict input to numbers. Here’s a small .forceNumeric() jQuery plug-in that does what I like to see in this case [updated thanks to Elijah Manor for a couple of small tweaks for additional keys to check for]:

        <script type="text/javascript">
            $(document).ready(function () {
                $(".numberinput").forceNumeric();
            });

            // forceNumeric() plug-in implementation
            jQuery.fn.forceNumeric = function () {
                return this.each(function () {
                    $(this).keydown(function (e) {
                        var key = e.which || e.keyCode;
                        if (!e.shiftKey && !e.altKey && !e.ctrlKey &&
                            // numbers
                            key >= 48 && key <= 57 ||
                            // numeric keypad
                            key >= 96 && key <= 105 ||
                            // comma, period and minus
                            key == 190 || key == 188 || key == 109 ||
                            // Backspace, Tab and Enter
                            key == 8 || key == 9 || key == 13 ||
                            // Home and End
                            key == 35 || key == 36 ||
                            // left and right arrows
                            key == 37 || key == 39 ||
                            // Del and Ins
                            key == 46 || key == 45)
                            return true;

                        return false;
                    });
                });
            }
        </script>

    With the plug-in in place in your page or an external .js file you can now simply use a selector to apply it: $(".numberinput").forceNumeric(); The plug-in goes through each selected element and hooks up a keydown() event handler. When a key is pressed the handler fires and the keyCode of the event object is checked. Recall that jQuery normalizes the JavaScript Event object between browsers. The code white-lists a few key codes and rejects all others: it returns true to let the keypress through, or false to eat the keystroke and not process it, which effectively discards it. Simple and low tech, and it works without too much change to typical text box behavior. © Rick Strahl, West Wind Technologies, 2005-2011. Posted in JavaScript, jQuery, HTML

    Read the article

  • PDF Converter Elite Giveaway – Lets you create, convert and edit any type of PDF with ease

    - by Gopinath
    Are you looking for PDF editing software that lets you create, edit and convert any type of PDF with ease? Then here is a chance for you to win a lifetime free license of PDF Converter Elite software. Tech Dreams, in partnership with pdfconverter.com, brings a giveaway contest exclusively for our readers. Continue reading to know the features of the application and the giveaway contest details. Adobe Acrobat is the best software for creating, editing and converting PDF files, but you need to spend a lot of money to buy it. PDF Converter Elite, which is priced at $100, has a rich set of features that satisfies most of your PDF management needs. Here is a quick rundown of the features of the application. Create PDF files from almost every popular Windows file format – you can create a PDF from almost 300 popular file formats supported by Windows. Want to convert a Word document to PDF? It’s just a click away. How about converting Excel spreadsheets, PowerPoint presentations, text files, images, etc.? Yes, with a single click you will be able to turn them into PDF files. Convert PDF to Word, Excel, PowerPoint, Publisher, HTML – this is one of the features I liked best in this software. You can convert a PDF to any MS Office file format without losing the alignment and quality of the document. The converted documents look exactly the same as your PDF documents and you would be surprised to see near-100% layout replication in the converted document. I fell in love with the perfection at which the files are converted. Edit PDF files easily – you can rework your PDF documents by inserting watermarks, page numbers, headers, footers and more. Also you will be able to merge two PDF files, overlay pages, remove unwanted pages, and split a single PDF into multiple files. Secure PDF files by setting a password – you can secure PDF files by limiting how others can use them: set a password to open the documents, and restrict various activities like printing, copy & paste, screen reading, form filling, etc. If you are looking for an affordable PDF editing application then PDF Converter Elite is there for you. 10 x PDF Converter Elite Licenses Giveaway Here come the details on winning a free single-user license for our readers – we have 10 PDF Converter Elite single-user licenses worth $100 each. To win a license all you need to do is: Like the Tech Dreams fan page on Facebook; Tweet or Like this post – buttons are available just below the post heading in the top section of this page; Finally, drop a comment on how you would like to use PDF Converter Elite. We will choose 10 winners through a lucky draw and the licenses will be sent to them in a personal email. Names of the winners will also be announced on Tech Dreams. So are you ready to grab a free copy of PDF Converter Elite worth $100?

    Read the article

  • Oracle Announces Oracle Insurance Policy Administration for Life and Annuity 9.4

    - by helen.pitts(at)oracle.com
    Today's global insurers require the ability to provide higher levels of service and quickly bring to market life insurance and annuity products that not only help them stand out from the competition, but also stay current with local legislation. To succeed, they require agile and flexible core systems that enable them to meet the unique localization requirements of the markets in which they operate, whether in North America, Asia Pacific or the Pan-European Region. The release of Oracle Insurance Policy Administration for Life and Annuity 9.4, announced today, helps insurers meet this need with expanded international market capabilities that enable them to reduce risk and profitably compete wherever their business takes them. It offers expanded multi-language support along with unit-linked product and fund processing capabilities that enable regional and global insurers to rapidly configure and deliver localized products – along with providing better service for end users through a single policy admin solution. Key enhancements include: Kanji/Kana language support, pre-defined content, and imperial date processing for the Japanese market; new localization flexibility for configuring and managing international mailing addresses along with regional variations for client information; enhanced capability to calculate unit-linked pricing and valuation, in addition to market-based processing and pre-configured unit-linked content; expanded role-based security and masking capability to further protect sensitive customer data; enhanced capability to restrict processing of specified activities based on time of day and user role, reducing exposure to market timing risks; and further capability to eliminate duplicate client records, helping to reduce underwriting risks and enhance servicing through a single view of the client. "The ability to leverage a single, rules-driven policy administration system for multiple global operation centers can help insurers realize significant improvements in speed to market, customer service, compliance with regional regulations, and consolidation efforts," noted Celent's Craig Weber, senior vice president, Insurance. "We believe such initiatives are necessary to help the industry address service and distribution imperatives." Helping our customers meet these mission-critical business imperatives is a key objective for Oracle Insurance. Active, ongoing dialogue with our customers is an important part of the process to help understand how our solutions are and can continue to help them achieve success in the marketplace. I had the opportunity to meet with several of our insurance customers at the Oracle Insurance Policy Administration Client Advisory Board meeting last week in Philadelphia, Penn. (View photos on the Oracle Insurance Facebook page.) It was a great forum for Oracle Insurance and our clients.
Discussion centered on the latest business and IT trends, with opportunities to learn more about the latest release of Oracle Insurance Policy Administration for Life and Annuity and other Oracle Insurance solutions such as data warehousing / business intelligence, while exchanging best practices for product innovation and servicing customers and sales channels. Helen Pitts is senior product marketing manager for Oracle Insurance's life and annuities solutions.

    Read the article

  • Implementing Database Settings Using Policy Based Management

    - by Ashish Kumar Mehta
    Introduction Database administrators have always had a tough time ensuring that all the SQL Servers they administer are configured according to the policies and standards of the organization. Using SQL Server's Policy Based Management feature, DBAs can now manage one or more instances of SQL Server 2008 and check for policy compliance issues. In this article we will utilize the Policy Based Management (aka Declarative Management Framework, or DMF) feature of SQL Server to implement and verify database settings on all production databases. It is best practice to enforce the settings below on each production database. However, it can be tedious to go through each database and then check whether these settings are implemented. In this article I will explain how to utilize the Policy Based Management feature of SQL Server 2008 to create a policy to verify these settings on all databases and, in cases of non-compliance, how to bring them back into compliance. Database settings to enforce on each user database: Auto Close and Auto Shrink properties of the database set to False; Auto Create Statistics and Auto Update Statistics set to True; Compatibility Level of all user databases set to 100; Page Verify set to CHECKSUM; Recovery Model of all user databases set to Full; Restrict Access set to MULTI_USER. Configure a Policy to Verify Database Settings 1. Connect to a SQL Server 2008 instance using SQL Server Management Studio. 2. In the Object Explorer, click Management > Policy Management and you will be able to see Policies, Conditions & Facets as child nodes. 3. Right-click Policies and then select New Policy… from the drop-down list as shown in the snippet below to open the Create New Policy popup window. 4. In the Create New Policy popup window you need to provide the name of the policy as “Implementing and Verify Database Settings for Production Databases” and then click the drop-down list under Check Condition. As highlighted in the snippet below, click the New Condition… option to open up the Create New Condition window. 5. In the Create New Condition popup window you need to provide the name of the condition as “Verify and Change Database Settings”. In the Facet drop-down list you need to choose the facet Database Options as shown in the snippet below. Under Expression you need to select the field @AutoClose, then choose the operator ‘=’ and finally choose the value False. Now that you have successfully added the first field you can go ahead and add the rest of the fields as shown in the snippet below. Once you have successfully added all the above fields of the Database Options facet, click OK to save the changes and to return to the parent Create New Policy – Implementing and Verify Database Settings for Production Database window, where you will see that the newly created condition “Verify and Change Database Settings” is selected by default. Continues…
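    As a quick manual cross-check of the same settings the policy verifies, you can read the corresponding columns straight from the sys.databases catalog view. A minimal T-SQL sketch (column names per the SQL Server 2008 catalog views; the database_id filter that skips system databases is an assumption about your environment):

        -- Spot-check the settings the policy enforces, one row per database
        SELECT name,
               is_auto_close_on,          -- expected: 0 (False)
               is_auto_shrink_on,         -- expected: 0 (False)
               is_auto_create_stats_on,   -- expected: 1 (True)
               is_auto_update_stats_on,   -- expected: 1 (True)
               compatibility_level,       -- expected: 100
               page_verify_option_desc,   -- expected: CHECKSUM
               recovery_model_desc,       -- expected: FULL
               user_access_desc           -- expected: MULTI_USER
        FROM sys.databases
        WHERE database_id > 4;            -- skip master, tempdb, model, msdb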

    Read the article

  • Proactive Support Sessions at OUG London and OUG Ireland

    - by THE
    Oracle Proactive Support Technology is proud to announce that two members of its team will be speaking at the UK and Ireland User Group Conferences this year. Maurice and Greg plan to run the following sessions (may be subject to change): Maurice Bauhahn OUG Ireland BI & EPM and Technology Joint SIG Meeting 20 November 2012 BI&EPM SIG event in Ireland (09:00-17:00) and OUG London EPM & Hyperion Conference 2012 Tuesday 23rd to Wednesday 24th Oct 2012 Profit from Oracle Diagnostic Tools Embedded in EPM Oracle bundles in many of its software suites valuable toolsets to collect logs and settings, slice/dice error messages, track performance, and trace activities across services. Become familiar with several enterprise-level diagnostic tools embedded in Enterprise Performance Management (Enterprise Manager Fusion Middleware Control, Remote Diagnostic Agent, Dynamic Monitoring Service, and Oracle Diagnostic Framework). Expedite resolution of Service Requests as you learn to upload output from these tools to My Oracle Support. Who will benefit from attending the session? Geeks will find this most beneficial, but anyone who raises Oracle technical service requests will learn valuable pointers that may speed resolution. The focus is on the EPM stack, but this session will benefit almost everyone who needs to drill deeper into Oracle software environments. What will delegates learn from the session? Delegates who participate in this session will learn: How to access and run Remote Diagnostic Agent, Enterprise Manager Fusion Middleware Control, Dynamic Monitoring Service, and Oracle Diagnostic Framework. How to exploit the strengths of each tool. How to pass the outputs to My Oracle Support. How to restrict exposure of sensitive information. OUG Ireland BI & EPM and Technology Joint SIG Meeting 20 November 2012 BI&EPM SIG event in Ireland (09:00-17:00) and OUG London EPM & Hyperion Conference 2012 Tuesday 23rd to Wednesday 24th Oct 2012 Using EPM-Specific Troubleshooting Tools EPM developers have created a number of EPM-specific tools to collect logs and configuration files, centralize configuration information, and validate a configured installation (Ziplogs, EPM Registry Editor, [Deployment Report, Registry Cleanup Utility, Reset Configuration Tool, EPMSYS Hostname Check] and Validate [EPM System Diagnostic]). Learn how to use these tools on your own or to expedite Service Request resolution. Who will benefit from attending the session? Anyone who monitors Oracle EPM environments or raises service requests will learn valuable lessons that could speed resolution of those requests. Anyone from novices to experts will benefit from this review of custom troubleshooting EPM tools. What will delegates learn from the session? Learn where to locate and start the EPM troubleshooting tools created by EPM developers. Learn how to collect and upload outputs of the EPM troubleshooting tools. Adapt to the history of changes in these tools across time and version. Learn how to make critical changes in configurations. Grzegorz Reizer OUG London EPM & Hyperion Conference 2012 Tuesday 23rd to Wednesday 24th Oct 2012 EPM 11.1.2.2: Detailed overview of new features and improvements in Financial Management products.
This presentation is a detailed overview of the new features and improvements introduced in Enterprise Performance Management 11.1.2.2 for Financial Management products (Hyperion Financial Management, Hyperion Planning, Financial Close Management). The presentation will cover a number of new product features, from the recently introduced configurable dimensionality in HFM to new functionality enhancements in Planning. We'll close the session with an overview of upgrade options from earlier product releases.

    Read the article

  • Securing Flexfield Value Sets in EBS 12.2

    - by Sara Woodhull
    Release 12.2 includes a new feature: flexfield value set security. This new feature gives you additional options for ensuring that different administrators have non-overlapping responsibilities, which in turn provides checks and balances for sensitive activities.  Separation of Duties (SoD) is one of the key concepts of internal controls and is a requirement for many regulations including: Sarbanes-Oxley (SOX) Act Health Insurance Portability and Accountability Act (HIPAA) European Union Data Protection Directive. Its primary intent is to put barriers in place to prevent fraud or theft by an individual acting alone. Implementing Separation of Duties requires minimizing the possibility that users could modify data across application functions where the users should not normally have access. For flexfields and report parameters in Oracle E-Business Suite, values in value sets can affect functionality such as the rollup of accounting data, job grades used at a company, and so on. Controlling access to the creation or modification of value set values can be an important piece of implementing Separation of Duties in an organization. New Flexfield Value Set Security feature Flexfield value set security allows system administrators to restrict users from viewing, adding or updating values in specific value sets. Value set security enables role-based separation of duties for key flexfields, descriptive flexfields, and report parameters. For example, you can set up value set security such that certain users can view or insert values for any value set used by the Accounting Flexfield but no other value sets, while other users can view and update values for value sets used for any flexfields in Oracle HRMS. You can also segregate access by Operating Unit as well as by role or responsibility.Value set security uses a combination of data security and role-based access control in Oracle User Management. Flexfield value set security provides a level of security that is different from the previously-existing and similarly-named features in Oracle E-Business Suite: Function security controls whether a user has access to a specific page or form, as well as what operations the user can do in that screen. Flexfield value security controls what values a user can enter into a flexfield segment or report parameter (by responsibility) during routine data entry in many transaction screens across Oracle E-Business Suite. Flexfield value set security (this feature, new in Release 12.2) controls who can view, insert, or update values for a particular value set (by flexfield, report, or value set) in the Segment Values form (FNDFFMSV). The effect of flexfield value set security is that a user of the Segment Values form will only be able to view those value sets for which the user has been granted access. Further, the user will be able to insert or update/disable values in that value set if the user has been granted privileges to do so.  Flexfield value set security affects independent, dependent, and certain table-validated value sets for flexfields and report parameters. Initial State of the Feature upon Upgrade Because this is a new security feature, it is turned on by default.  When you initially install or upgrade to Release 12.2.2, no users are allowed to view, insert or update any value set values (users may even think that their values are missing or invalid because they cannot see the values).  
You must explicitly set up access for specific users by enabling appropriate grants and roles for those users.We recommend using flexfield value set security as part of a comprehensive Separation of Duties strategy. However, if you choose not to implement flexfield value set security upon upgrading to or installing Release 12.2, you can enable backwards compatibility--users can access any value sets if they have access to the Values form--after you upgrade. The feature does not affect day-to-day transactions that use flexfields.  However, you must either set up specific grants and roles or enable backwards compatibility before users can create new values or update or disable existing values. For more information, see: Release 12.2 Flexfield Value Set Security Documentation Update for Patch 17305947:R12.FND.C (Document 1589204.1) R12.2 TOI: Implement and Use Application Object Library (AOL) - Flexfields Security and Separation of Duties for Value Sets (recorded training)

    Read the article

  • Functional/nonfunctional requirements VS design ideas

    - by Nicholas Chow
    Problem domain Functional requirements define what a system does. Non-functional requirements define quality attributes of what the system does as a whole (performance, security, reliability, volume, usability, etc.). Constraints limit the design space; they restrict designers to certain types of solutions. Solution domain Design ideas define how the system does it. For example, a stakeholder need might be "we want to increase our sales, therefore we must improve the usability of our webshop so more customers will purchase", and a requirement can be written for this (problem domain). Design takes this further into the solution domain by saying "therefore we want to offer credit card payments in addition to the current prepayment option". My problem is that the transition phase from requirement to design seems really vague, therefore when writing requirements I am often confused whether or not I have incorporated design ideas in my requirements, which would make my requirement wrong. Another problem is that I often write functional requirements as what a system does, and then I also specify in what timeframe it must be done. But is this correct? Is it then still a functional requirement or a non-functional one? Is it better to separate it into two distinct requirements? Here are a few requirements I wrote: FR1 Registration of Organizer FR1 describes the registration of an Organizer on CrowdFundum FR1.1 The system shall display a registration form on the website. FR1.2 The system shall require a Name, Username, Document number passport/ID card, Address, Zip code, City, Email address, Telephone number, Bank account, Captcha code on the registration form when a user registers. FR1.4 The system shall display an error message containing “Registration could not be completed” to the subscriber within 1 second after the system check of the registration form was unsuccessful. FR1.5 The system shall send a verification email containing a verification link to the subscriber within 30 seconds after the system check of the registration form was successful. FR1.6 The system shall add the newly registered Organizer to the user base within 5 seconds after the verification link was accessed. FR2 Organizer submits a Project FR2 describes the submission of a Project by an Organizer on CrowdFundum - FR2 The system shall display a submit Project form to the Organizer accounts on the website. - FR2.3 The system shall check for completeness the Name of the Project, 1-3 Photos, Keywords of the Project, Punch line, Minimum and maximum amount of people, Funding threshold, One or more reward tiers, Schedule of when what will be organized, Budget plan, 300-800 words of additional information about the Project, Contact details within 1 second after an Organizer submits the submit Project form. - FR2.8 The system shall add the Project link to the new Projects category on the homepage within 30 seconds after the system made a Project webpage. - FR2.9 The system shall include in the Project link for the homepage: Name of the Project, 1 Photo, Punch line within 30 seconds after the system made a Project webpage. Questions: FR1.1: Have I incorporated a design idea here? Would "the system shall have a registration form" be a better functional requirement? FR1.2, 2.3: Are these singular? Would the conditions be better written each as its own separate requirement? FR1.4: Is this a design idea? Is this a correct functional requirement, or have I incorporated something non-functional (performance) in it?
    Would it be better if I wrote it like this: FR1 The system shall display an error message when the check is unsuccessful. NFR: The system will respond to unsuccessful registration form checks within 1 second. Same question for FR2.8 and 2.9. FR2.3: The system shall check for "completeness" – is completeness used ambiguously here? Should I rephrase it? FR1.2: I added that the system shall require a "Captcha code" – is this a functional requirement, or does it belong to the "security aspect" of a non-functional requirement? I am eagerly waiting for your response. Thanks!

    Read the article

  • Future Of F# At Jazoon 2011

    - by Alois Kraus
    I was at Jazoon 2011 in Zurich (Switzerland). It was a really cool event and it had many top-notch speakers, not only from the Microsoft universe. One of the most interesting talks was from Don Syme with the title: F# Today/F# Tomorrow. He showed how to use F# scripting to browse through open databases, OData web services, Sharepoint, … interactively. It looked really easy with the help of F# Type Providers, which is the next big language feature in a future F# version. The object returned by a Type Provider is used to access the data like a usual strongly typed object model. No guessing what the property of an object is called – Intellisense will show it just as you expect. There exists a range of Type Providers for various data sources where the schema of the stored data can somehow be dynamically extracted. For a free database it would then be something like: let data = DbProvider(http://.....); where data is the object which contains all data from e.g. a chemical database. It has an elements collection which contains an element which has the properties: Name, AtomicMass, Picture, …. You can browse the object returned by the Type Provider with full Intellisense because the returned object is strongly typed, which makes this happen. The same can of course be achieved with code generators that take as input the schema of the input data (OData web service, database, Sharepoint, JSON serialized data, …) and spit out the necessary strongly typed objects as an assembly. This does work, but it has the downside that if the schema of your data source is huge you will quickly run against a wall with traditional code generators, since the generated “deserialization” assembly could easily become several hundred MB. *** The following part contains my guesses at how this exactly works, based on asking Don two questions *** Q: Can I use Type Providers within C#? D: No. Q: F# is after all a library. Can I reference the F# assemblies and use the contained Type Providers? D: F# does annotate the generated types in a special way at runtime which is not a static type that C# could use. The F# Type Providers seem to use a hybrid approach. At compilation time the Type Provider is instantiated with the url of your input data. The obtained schema information is used by the compiler to generate static types as usual, but only for a small subset (the top-level classes up to a certain nesting level would make sense to me). To make this work you need to access the actual data source at compile time, which could be a problem if you want to keep the actual url in a config file. Ok, so this explains why it does work at all. But in the demo we did see full Intellisense support down to the deepest object level. It looks like, as you navigate deeper into the object hierarchy, the Type Provider is instantiated in the background and attaches to a true static type the properties determined at run time while you were typing. So this type is not really static at all. It is static if you define "static" as meaning that its properties show up in Intellisense. But since this type information is determined while you are typing, it is not used to generate a true static type, and you cannot use these “intellistatic” types from C#. Nonetheless this is a very cool language feature. With the plotting libraries you can generate expressive charts from any data source within seconds, to quickly get an overview of any structured data storage. While my favorite programming language C# will not get such features in the near future, there is hope.
    If you restrict yourself to OData sources you can use LINQPad to query any OData-enabled data source with LINQ with ease. There you can query Stackoverflow with LINQ directly. The output is also nicely rendered, which makes it a very good tool to explore OData sources today.

    Read the article

  • Starting over and new to Ubuntu

    - by 2funnyyone
    We have been having repeated problems with our internet service and with Windows XP & SP3 (users and permissions) – I see no need for them. I started with computers long before Windows. Ever since SP3 came out in 2009 I have had nothing but problems. I have lost so many computers to viruses and trojans, we just stack them up. We are with Qwest/CenturyLink, which is using advertising servers that I think are causing the problem. All the computers are networked together, which is not how I set them up. I believe CenturyLink is networking them through assignment of a domain for our home. This caused all the computers to crash twice. This is getting expensive. We tried buying new hard drives but they reinfect within hours of connecting to the internet. I also believe the modem, router and all computers are infected. I put ComboFix on this one and that is the only reason we are still online with this laptop. I am afraid to install new equipment because my partner and I are on SSDI and this costs a lot. I go to school at UOP and had to run off a flash drive and reboot this laptop to recovery every other day or so this past month. The new plan is: we are getting ready to install new equipment but are afraid of reinfecting again. Need help to install the new equipment. The plan is to use our current internet service from Qwest/CenturyLink. The list of new equipment, in order: the CenturyLink wireless modem is a ZyXEL PK5000Z with 4 direct-connect Ethernet ports; next, a Dell Optiplex 210L (used auction purchase), 2 GB RAM, 80 GB hard drive, Ubuntu 11.10 operating system; next, a wireless D-Link router WBR-1310 with 4 direct-connect Ethernet ports. OK – we purchased a Dell OEM disk for repairing or reinstalling the Windows XP Professional operating system (2 roommates as well). All infected computers are Dell desktops or laptops with XP Pro. Also purchasing Ubuntu 12.04 for 3 computers. We like the way it runs but are still learning it. Questions: 1] How do we fdisk the infected computers without infecting the new system? We have DOS disks, but none of the machines have a floppy disk drive. We do have a new floppy disk drive and USB adapter we purchased from Amazon. 2] We are thinking Avast internet security because of the boot scan. We want all software loaded before reconnecting. We can manually load our internet provider information. We purchased StopZilla ($100 for 5 computers), but I am not sure that is what we need. We also need to know how to set up the ports, security and services we will need – really lost at this part – so we are safe when we go back on the internet. 3] We want to connect the reloaded systems to the router as a public connection with no sharing. Do not want to network all the computers. 4] We want parental/ownership control from the Ubuntu system for the internet connection (children and friends). Do we restrict at the modem and/or the router? Any help would be a blessing. I do not want to go it alone on this anymore.

    Read the article

  • Find Knowledge Quickly

    - by Get Proactive Customer Adoption Team
    Get to relevant knowledge on the Oracle products you use in a few quick steps! Customers tell us that the volume of search results returned can make it difficult to find the information they need, especially when similar Oracle products exist. These simple tips show you how to filter, browse, search, and refine your results to get relevant answers faster. Filter first: PowerView is your best friend PowerView is an often-ignored feature of My Oracle Support that enables you to control the information displayed on the Dashboard, the Knowledge tab and regions, and the Service Request tab based on one or more parameters. You can define a PowerView to limit information based on product, product line, support ID, platform, hostname, system name and others. Using PowerView allows you to restrict: Your search results to the filters you have set The product list when selecting your products in Search & Browse and when creating service requests   The PowerView menu is at the top of My Oracle Support, near the title. You turn PowerView on by clicking PowerView is Off, which is a button. When PowerView is On, and filters are active, clicking the button again will toggle PowerView off. Click the arrow to the right to create new filters, edit filters, remove a filter, or choose from the list of previously created filters. You can create a PowerView in 3 simple steps! Turn PowerView on and select New from the PowerView menu. Select your filter from the Select Filter Type dropdown list and make selections from the other two menus. Hint: While there are many filter options, selecting your product line or your list of products will provide you with an effective filter. Click the plus sign (+) to add more filters. Click the minus sign (-) to remove a filter. Click Create to save and activate the filter(s). You’ll notice that PowerView is On displays along with the active filters. For more information about the PowerView capabilities, click the Learn more about PowerView… menu item or view a short video. Browse & Refine: Access the Best Match Fast for Your Product and Task In the Knowledge Browse region of the Knowledge or Dashboard tabs, pick your product, pick your task, and select a version, if applicable. A best-match document – a collection of knowledge articles and resources specific to your selections – may display, offering you a one-stop shop. The best-match document, called an “information center,” is an aggregate of dynamically updated links to information pertinent to the product, task, and version (if applicable) you chose. These documents are refreshed every 24 hours to ensure that you have the most current information at your fingertips. Note: Not all products have “information centers.” If no information center appears as a best match, click Search to see a list of search results. From the information center, you can access topics from a product overview to security information, as shown in the left menu. Just want to search? That’s easy too! Again, pick your product, pick your task, select a version, if applicable, enter a keyword term, and click Search. Hint: In this example, you’ll notice that PowerView is on and set to PeopleSoft Enterprise. When PowerView is on and you select a product from the Knowledge Base product list, the listed products are limited to the active PowerView filter. (Products you’ve previously picked are also listed at the top of the dropdown list.) Your search results are displayed based on the parameters you entered. It’s that simple!
Related Information: My Oracle Support - User Resource Center [ID 873313.1] My Oracle Support Community For more tips on using My Oracle Support, check out these short video training modules. My Oracle Support Speed Video Training [ID 603505.1]

    Read the article

  • Non-blocking I/O using Servlet 3.1: Scalable applications using Java EE 7 (TOTD #188)

    - by arungupta
    Servlet 3.0 allowed asynchronous request processing but only traditional I/O was permitted. This can restrict the scalability of your applications. In a typical application, ServletInputStream is read in a while loop:

        public class TestServlet extends HttpServlet {
            protected void doGet(HttpServletRequest request, HttpServletResponse response)
                    throws IOException, ServletException {
                ServletInputStream input = request.getInputStream();
                byte[] b = new byte[1024];
                int len = -1;
                while ((len = input.read(b)) != -1) {
                    . . .
                }
            }
        }

    If the incoming data is blocking or streamed slower than the server can read, then the server thread is left waiting for that data. The same can happen if the data is written to ServletOutputStream. This is resolved in Servlet 3.1 (JSR 340, to be released as part of Java EE 7) by adding event listeners – the ReadListener and WriteListener interfaces. These are then registered using ServletInputStream.setReadListener and ServletOutputStream.setWriteListener. The listeners have callback methods that are invoked when the content is available to be read or can be written without blocking. The updated doGet in our case will look like:

        AsyncContext context = request.startAsync();
        ServletInputStream input = request.getInputStream();
        input.setReadListener(new MyReadListener(input, context));

    Invoking the setXXXListener methods indicates that non-blocking I/O is used instead of traditional I/O. At most one ReadListener can be registered on ServletInputStream, and similarly at most one WriteListener can be registered on ServletOutputStream. ServletInputStream.isReady and ServletInputStream.isFinished are new methods to check the status of a non-blocking I/O read. ServletOutputStream.canWrite is a new method to check if data can be written without blocking. The MyReadListener implementation looks like:

        @Override
        public void onDataAvailable() {
            try {
                StringBuilder sb = new StringBuilder();
                int len = -1;
                byte b[] = new byte[1024];
                while (input.isReady() && (len = input.read(b)) != -1) {
                    String data = new String(b, 0, len);
                    System.out.println("--> " + data);
                }
            } catch (IOException ex) {
                Logger.getLogger(MyReadListener.class.getName()).log(Level.SEVERE, null, ex);
            }
        }

        @Override
        public void onAllDataRead() {
            System.out.println("onAllDataRead");
            context.complete();
        }

        @Override
        public void onError(Throwable t) {
            t.printStackTrace();
            context.complete();
        }

    This implementation has three callbacks: the onDataAvailable callback method is called whenever data can be read without blocking; the onAllDataRead callback method is invoked when data for the current request has been completely read; the onError callback is invoked if there is an error processing the request. Notice that context.complete() is called in onAllDataRead and onError to signal the completion of the data read. For now, the first chunk of available data needs to be read in the doGet or service method of the Servlet; the rest of the data can be read in a non-blocking way using the ReadListener after that. This is going to get cleaned up so that all data reads can happen in the ReadListener only. The sample explained above can be downloaded from here and works with GlassFish 4.0 build 64 and onwards. The slides and a complete re-run of the What's new in Servlet 3.1: An Overview session at JavaOne are available here. Here are some more references for you: Java EE 7 Specification Status Servlet Specification Project JSR Expert Group Discussion Archive Servlet 3.1 Javadocs
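    For symmetry, here is a sketch of what the write side could look like. This is not from the original post: it assumes a queue of pending strings as the data source, and it uses isReady(), the name the final Servlet 3.1 API settled on for the draft's ServletOutputStream.canWrite:

        import java.io.IOException;
        import java.util.Queue;
        import javax.servlet.AsyncContext;
        import javax.servlet.ServletOutputStream;
        import javax.servlet.WriteListener;

        public class MyWriteListener implements WriteListener {
            private final ServletOutputStream output;
            private final Queue<String> queue;   // illustrative pending-data source
            private final AsyncContext context;

            public MyWriteListener(ServletOutputStream output, Queue<String> queue,
                                   AsyncContext context) {
                this.output = output;
                this.queue = queue;
                this.context = context;
            }

            @Override
            public void onWritePossible() throws IOException {
                // Write only while the container says the stream can take data;
                // when isReady() returns false, the container calls back later.
                while (output.isReady() && !queue.isEmpty()) {
                    output.print(queue.poll());
                }
                if (queue.isEmpty()) {
                    context.complete();
                }
            }

            @Override
            public void onError(Throwable t) {
                t.printStackTrace();
                context.complete();
            }
        }

    It would be registered from the servlet with response.getOutputStream().setWriteListener(...) after startAsync, mirroring the read side.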

    Read the article

  • Pro/con of using Angular directives for complex form validation/ GUI manipulation

    - by tengen
    I am building a new SPA front end to replace an existing enterprise's legacy hodgepodge of systems that are outdated and in need of updating. I am new to Angular, and wanted to see if the community could give me some perspective. I'll state my problem, and then ask my question. I have to generate several series of checkboxes based on data from a .js include, with data like this:

        $scope.fieldMappings.investmentObjectiveMap = [
            {'id':"CAPITAL PRESERVATION", 'name':"Capital Preservation"},
            {'id':"STABLE", 'name':"Moderate"},
            {'id':"BALANCED", 'name':"Moderate Growth"},
            // etc
            {'id':"NONE", 'name':"None"}
        ];

    The checkboxes are created using an ng-repeat, like this:

        <div ng-repeat="investmentObjective in fieldMappings.investmentObjectiveMap">
            ...
        </div>

    However, I needed the values represented by the checkboxes to map to a different model (not just 2-way-bound to the fieldMappings object). To accomplish this, I created a directive, which accepts a destination array destarray which is eventually mapped to the model. I also know I need to handle some very specific GUI controls, such as unchecking "None" if anything else gets checked, or checking "None" if everything else gets unchecked. Also, "None" won't be an option in every group of checkboxes, so the directive needs to be generic enough to accept a validation function that can fiddle with the checked state of the checkbox group's inputs based on what's already clicked, but smart enough not to break if there is no option called "NONE". I started to do that by adding an ng-click which invoked a function in the controller, but in looking around Stack Overflow, I read people saying that it's bad to put DOM manipulation code inside your controller – it should go in directives. So do I need another directive? So far (html):

        <input my-checkbox-group type="checkbox" fieldobj="investmentObjective"
               ng-click="validationfunc()" validationfunc="clearOnNone()"
               destarray="investor.investmentObjective" />

    Directive code:

        .directive("myCheckboxGroup", function () {
            return {
                restrict: "A",
                scope: {
                    destarray: "=",      // the destination array for the checkbox values
                    fieldobj: "=",       // the source object the value comes from
                    validationfunc: "&"  // the function to be called for validation (optional)
                },
                link: function (scope, elem, attrs) {
                    // Initialize the checked state from the destination array
                    if (scope.destarray.indexOf(scope.fieldobj.id) !== -1) {
                        elem[0].checked = true;
                    }
                    // Push/remove the id as the checkbox is toggled
                    elem.bind('click', function () {
                        var index = scope.destarray.indexOf(scope.fieldobj.id);
                        if (elem[0].checked) {
                            if (index === -1) {
                                scope.destarray.push(scope.fieldobj.id);
                            }
                        } else {
                            if (index !== -1) {
                                scope.destarray.splice(index, 1);
                            }
                        }
                    });
                }
            };
        })

    .js controller snippet:

        .controller('SuitabilityCtrl', ['$scope', function ($scope) {
            $scope.clearOnNone = function() {
                // naughty jQuery DOM manipulation code that
                // looks at checkboxes and checks/unchecks as needed
            };

    The above code is done and works fine, except the naughty jQuery code in clearOnNone(), which is why I wrote this question. And here is my question: after ALL this, I think to myself – I could be done already if I had just manually handled all this GUI logic and validation junk with jQuery written in my controller. At what point does it become foolish to write these complicated directives that future developers will have to puzzle over, more than if I had just written jQuery code that 99% of us would understand with a glance? How do other developers draw the line? I see this all over Stack Overflow.
    For example, this question seems like it could be answered with a dozen lines of straightforward jQuery, yet the asker has opted to do it the Angular way, with a directive and a partial... it seems like a lot of work for a simple problem. Specifically, I suppose I would like to know: how SHOULD I be writing the code that checks whether "None" has been selected (if it exists as an option in this group of checkboxes), and then checks/unchecks the other boxes accordingly? A more complex directive? I can't believe I'm the only developer who has to implement code that is more complex than needed just to satisfy an opinionated framework.
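    One model-driven sketch of the "None" handling, in the spirit of an answer (all names come from the question; the key assumptions are that the directive's click handler mutates destarray inside scope.$apply, and that the directive $watches destarray to sync each checkbox's checked state rather than setting it only once in link):

        // In SuitabilityCtrl: keep the rule in the model, not the DOM.
        // The checkboxes re-render themselves from destarray; no jQuery needed.
        $scope.$watchCollection('investor.investmentObjective', function (newVal, oldVal) {
            if (!newVal || !oldVal || newVal.length === 0) { return; }
            // Which id was just added, if any?
            var added = newVal.filter(function (id) {
                return oldVal.indexOf(id) === -1;
            })[0];
            var noneIdx = newVal.indexOf('NONE');
            if (added === 'NONE') {
                // "None" was just checked: clear everything else.
                newVal.length = 0;
                newVal.push('NONE');
            } else if (added && noneIdx !== -1) {
                // Something other than "None" was just checked: drop "None".
                newVal.splice(noneIdx, 1);
            }
        });

    Because the rule only ever touches the array, it degrades gracefully when a checkbox group has no "NONE" option, which was one of the stated requirements.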

    Read the article

  • Bowing to User Experience

    As a consumer of geeky news it is hard to check my Google Reader without running into two or three posts about Apple's iPad and in particular the changes to the developer guidelines which seemingly restrict developers to using Apple's Xcode tool and Objective-C language for iPad apps. One of the alternatives to Objective-C affected is MonoTouch, an option with some appeal to me as it is based on the Mono implementation of C#. Seemingly restricted is the key phrase here; as far as I can tell, no official announcement has been made about its fate. For more details around MonoTouch for iPhone OS, check out Miguel de Icaza's post: http://tirania.org/blog/archive/2010/Apr-28.html. These restrictions have provoked some outrage, as the perception is that Apple is arrogantly restricting developers' freedom to create applications as they choose, and perhaps unwittingly shortchanging iPhone/iPad users who won't benefit from these now never-to-be-made great applications. Apple's response has mostly been to say they are concentrating on providing a certain user experience to their customers, and to do this, they insist everyone uses the tools they approve. Which isn't a surprising line of reasoning, given Apple already restricts the hardware used and the content of the apps. The vogue term for this approach is curated, as in a benevolent museum director selecting only the finest artifacts for display or a wise gardener arranging the plants in a garden just so. If this is what a curated experience is like, it is hard to argue that consumers are not responding. My iPhone is probably the most satisfying piece of technology I own. Coming from the Razr, it really was a revolution in how the form factor, interface and user experience all tied together. While the curated approach reinvented the smart phone genre, it is easy to forget that this is not a new approach for Apple. Macbooks and Macs are Apple hardware that run Apple software. And they've been successful, but not quite in the same way as the iPhone or iPad (based on early indications). Why not? Well, a curated approach can only be wildly successful if the curator a) makes the right choices and b) offers choices that no one else has. Although its advantages are eroding, the iPhone was different from other phones: a unique, focused, touch-centric experience. The iPad is an attempt to define another category of computing. Macs and Macbooks are great devices, but are not fundamentally a different user experience than a PC; you still have windows, file folders, mouse and keyboard, and similar applications. So the big question for Apple is: can they hold on to their market advantage, continue innovating in user experience and stay on top? Or are they going to be like Xerox, where the rest of the world says thank you for the windows metaphor, now let me implement that better? It will be exciting to watch, with Android already a viable competitor and Microsoft readying Windows Phone 7. And to close the loop back to the restrictions on developing for iPhone OS: at this point the main target appears to be Adobe and Adobe Flash. Apple's calculation is that a) they don't need those developers or b) the developers they want will learn Apple's stuff anyway. My guess is that they are correct; that as much as I like the idea of developers having more options, I am not going to buy a competitor's product to spite Apple unless that product is just as usable. For a non-technical consumer, I don't know that this conversation even factors into the buying decision.
If it did, we'd be talking about how Microsoft is trying to retake a slice of market share from the behemoth that is Linux.

    Read the article

  • What's new in Servlet 3.1 ? - Java EE 7 moving forward

    - by arungupta
    Servlet 3.0 was released as part of Java EE 6 and made huge changes focused on ease-of-use. The idea was to leverage the latest language features, such as annotations and generics, and modernize how Servlets are written. The web.xml was made as optional as possible. Servlet 3.1 (JSR 340), scheduled to be part of Java EE 7, is an incremental release focusing on a couple of key features and some clarifications in the specification. The main features of Servlet 3.1 are explained below: Non-blocking I/O – Servlet 3.0 allowed asynchronous request processing but only traditional I/O was permitted. This can restrict the scalability of your applications. Non-blocking I/O allows you to build scalable applications. TOTD #188 provides more details about how non-blocking I/O can be done using Servlet 3.1. HTTP protocol upgrade mechanism – Section 14.42 in the HTTP 1.1 specification (RFC 2616) defines an upgrade mechanism that allows a transition from HTTP 1.1 to some other, incompatible protocol. The capabilities and nature of the application-layer communication after the protocol change are entirely dependent upon the new protocol chosen. After an upgrade is negotiated between the client and the server, the subsequent requests use the newly chosen protocol for message exchanges. A typical example is how the WebSocket protocol is upgraded from HTTP, as described in the Opening Handshake section of RFC 6455. The decision to upgrade is made in the Servlet.service method. This is achieved by adding a new method, HttpServletRequest.upgrade, and two new interfaces: javax.servlet.http.HttpUpgradeHandler and javax.servlet.http.WebConnection. TyrusHttpUpgradeHandler shows how the WebSocket protocol upgrade is done in Tyrus (the Reference Implementation for the Java API for WebSocket). Security enhancements: Applying run-as security roles to #init and #destroy methods. Protection against session fixation attacks by adding HttpServletRequest.changeSessionId and a new interface, HttpSessionIdListener; you can listen for any session id changes using these methods. Default security semantics for non-specified HTTP methods in <security-constraint>. Clarified semantics if a parameter is specified in both the URI and the payload. Miscellaneous: ServletResponse.reset clears any data that exists in the buffer as well as the status code and headers; in addition, Servlet 3.1 will also clear the state of calling getServletOutputStream or getWriter. ServletResponse.setCharacterEncoding sets the character encoding (MIME charset) of the response being sent to the client, for example, to UTF-8. A relative protocol URL can be specified in HttpServletResponse.sendRedirect; this allows a URL to be specified without a scheme. That means instead of specifying "http://anotherhost.com/foo/bar.jsp" as a redirect address, "//anotherhost.com/foo/bar.jsp" can be specified, in which case the scheme of the corresponding request will be used. Clarification of HttpServletRequest.getPart and .getParts without multipart configuration. Clarification that ServletContainerInitializer is independent of metadata-complete and is instantiated per web application. A complete replay of What's New in Servlet 3.1: An Overview from JavaOne 2012 can be seen here (click on CON6793_mp4_6793_001 in Media). Each feature will be added to the JSR subject to EG approval. You can share your feedback to [email protected].
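    To make the upgrade mechanism concrete, here is a minimal sketch against the final Servlet 3.1 API (the "echo" protocol name, the EchoHandler class and the servlet are illustrative, not from the post; error handling is omitted):

        import java.io.IOException;
        import javax.servlet.ServletException;
        import javax.servlet.http.*;

        // A servlet that switches the connection to a custom protocol.
        public class UpgradeServlet extends HttpServlet {
            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws IOException, ServletException {
                if ("echo".equals(req.getHeader("Upgrade"))) {
                    resp.setStatus(101);                    // 101 Switching Protocols
                    resp.setHeader("Upgrade", "echo");
                    resp.setHeader("Connection", "Upgrade");
                    req.upgrade(EchoHandler.class);         // container instantiates the handler
                }
            }
        }

        // Called once the HTTP exchange is over and the raw connection is handed off.
        public class EchoHandler implements HttpUpgradeHandler {
            @Override
            public void init(WebConnection wc) {
                // From here on, the connection speaks the new protocol;
                // wc.getInputStream() / wc.getOutputStream() give raw stream access.
            }

            @Override
            public void destroy() { }
        }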
Here are some more references for you: Servlet 3.1 Public Review Candidate Downloads Servlet 3.1 PR Candidate Spec Servlet 3.1 PR Candidate Javadocs Servlet Specification Project JSR Expert Group Discussion Archive Java EE 7 Specification Status Several features have already been integrated in GlassFish 4 Promoted Builds. Have you tried any of them ? Here are some other Java EE 7 primers published so far: Concurrency Utilities for Java EE (JSR 236) Collaborative Whiteboard using WebSocket in GlassFish 4 (TOTD #189) Non-blocking I/O using Servlet 3.1 (TOTD #188) What's New in EJB 3.2 ? JPA 2.1 Schema Generation (TOTD #187) WebSocket Applications using Java (JSR 356) Jersey 2 in GlassFish 4 (TOTD #182) WebSocket and Java EE 7 (TOTD #181) Java API for JSON Processing (JSR 353) JMS 2.0 Early Draft (JSR 343) And of course, more on their way! Do you want to see any particular one first ?

    Read the article

  • Write-only collections in MongoDB

    - by rcoder
    I'm currently using MongoDB to record application logs, and while I'm quite happy with both the performance and with being able to dump arbitrary structured data into log records, I'm troubled by the mutability of log records once stored. In a traditional database, I would structure the grants for my log tables such that the application user had INSERT and SELECT privileges, but not UPDATE or DELETE. Similarly, in CouchDB, I could write a update validator function that rejected all attempts to modify an existing document. However, I've been unable to find a way to restrict operations on a MongoDB database or collection beyond the three access levels (no access, read-only, "god mode") documented in the security topic on the MongoDB wiki. Has anyone else deployed MongoDB as a document store in a setting where immutability (or at least change tracking) for documents was a requirement? What tricks or techniques did you use to ensure that poorly-written or malicious application code could not modify or destroy existing log records? Do I need to wrap my MongoDB logging in a service layer that enforces the write-only policy, or can I use some combination of configuration, query hacking, and replication to ensure a consistent, audit-able record is maintained?
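    A minimal sketch of the service-layer idea raised at the end of the question, assuming a modern pymongo client (the database, collection and method names are illustrative). It guards against accidental mutation by application code, though not against a caller holding raw database credentials:

        from pymongo import MongoClient


        class AppendOnlyLog:
            """Expose only insert and read operations over a log collection."""

            def __init__(self, uri="mongodb://localhost:27017",
                         db="logs", collection="app"):
                self._coll = MongoClient(uri)[db][collection]

            def append(self, record):
                # The only write path; update/delete are never exposed.
                return self._coll.insert_one(record).inserted_id

            def find(self, query=None, limit=100):
                # Read-only access to stored records.
                return list(self._coll.find(query or {}).limit(limit))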

    Read the article

  • How to allow admins to edit my app's config files without UAC elevation?

    - by Justin Grant
    My company produces a cross-platform server application which loads its configuration from user-editable configuration files. On Windows, config file ACLs are locked down by our Setup program to allow reading by all users but restrict editing to Administrators and Local System only. Unfortunately, on Windows Server 2008, even local administrators no longer have admin privileges (because of UAC) unless they're running an elevated app. This has caused complaints from users who cannot use their favorite text editor to open and save config file changes – they can open the files (since anyone can read) but can't save. Does anyone have recommendations for what we can do (if anything) in our app's Setup to make editing easier for admins on Windows Server 2008? Related questions: if a Windows Server 2008 admin wants to edit an admins-only config file, how does he normally do it? Is he forced to use a text editor which is smart enough to auto-elevate when elevation is needed, like Windows Explorer does in response to access-denied errors? Does he launch the editor from an elevated command-prompt window? Something else?

    Read the article

  • set iPhone Required device capabilities

    - by chozinul
    My application is for "iPhone ONLY", but I made a mistake in the first place by not setting the application for a specific device. The current setting now is – Requirement: Compatible with iPhone, iPod touch and iPad. I would like to set it to – Requirement: Compatible with iPhone. I have gone through some tutorials and forums, and nothing solves my problem. I have also contacted Apple support and this is what I got: "You are permitted to expand your device requirements only. Submitting an update to your binary to restrict your device requirements is not permitted." Does it mean that I cannot set my application to iPhone only? If I can change it, what would you recommend I do? I have changed the setting of the device capability inside the plist to one of these. Which one is correct to use: "UIRequiredDeviceCapabilities" or "Required device capabilities"? I provide the screenshot here: http://img218.imageshack.us/gal.php?g=screenshot20100604atam1.png Does the way I use it look correct? Thanks.
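    For what it's worth, "Required device capabilities" is simply the human-readable name the plist editor shows for the raw UIRequiredDeviceCapabilities key, so the two refer to the same setting. A sketch of the Info.plist entry that limits an app to devices with phone hardware (i.e. iPhone only), noting, per the Apple response quoted above, that tightening this in an update to an already-shipped app is not permitted:

        <key>UIRequiredDeviceCapabilities</key>
        <array>
            <string>telephony</string>
        </array>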

    Read the article

  • How can I write a clean Repository without exposing IQueryable to the rest of my application?

    - by Simucal
    So, I've read all the Q&As here on SO regarding the subject of whether or not to expose IQueryable to the rest of your project (see here, and here), and I've ultimately decided that I don't want to expose IQueryable to anything but my Model. Because IQueryable is tied to certain persistence implementations, I don't like the idea of locking myself into this. Similarly, I'm not sure how good I feel about classes further down the call chain that aren't in the repository modifying the actual query. So, does anyone have any suggestions for how to write a clean and concise Repository without doing this? One problem I see is that my Repository will blow up from a ton of methods for the various things I need to filter my query on. Having a bunch of:

        IEnumerable GetProductsSinceDate(DateTime date);
        IEnumerable GetProductsByName(string name);
        IEnumerable GetProductsByID(int ID);

    If I was allowing IQueryable to be passed around I could easily have a generic repository that looked like:

        public interface IRepository<T> where T : class
        {
            T GetById(int id);
            IQueryable<T> GetAll();
            void InsertOnSubmit(T entity);
            void DeleteOnSubmit(T entity);
            void SubmitChanges();
        }

    However, if you aren't using IQueryable then methods like GetAll() aren't really practical, since lazy evaluation won't be taking place down the line. I don't want to return 10,000 records only to use 10 of them later. What is the answer here? In Conery's MVC Storefront he created another layer called the "Service" layer which received IQueryable results from the repository and was responsible for applying various filters. Is this what I should do, or something similar? Have my repository return IQueryable but restrict access to it by hiding it behind a bunch of filter classes like GetProductByName, which will return a concrete type like IList or IEnumerable?
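    One middle ground, sketched below (the Product type and its properties are illustrative): keep IQueryable private to the repository and accept filter expressions, so callers compose queries without ever holding the query itself, and the filter still executes in the database before materialization:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Linq.Expressions;

        public class Product
        {
            public int ID { get; set; }
            public string Name { get; set; }
            public DateTime CreatedOn { get; set; }
        }

        public interface IRepository<T> where T : class
        {
            T GetById(int id);
            // Callers pass a predicate; they never see the IQueryable itself.
            IEnumerable<T> Find(Expression<Func<T, bool>> predicate);
            void InsertOnSubmit(T entity);
            void DeleteOnSubmit(T entity);
            void SubmitChanges();
        }

        public class ProductRepository // implements the query side of IRepository<Product>
        {
            private readonly IQueryable<Product> _source; // e.g. a LINQ to SQL Table<Product>

            public ProductRepository(IQueryable<Product> source) { _source = source; }

            public IEnumerable<Product> Find(Expression<Func<Product, bool>> predicate)
            {
                // Where() composes onto the underlying provider, so the predicate
                // runs in the database; ToList() pins materialization down here.
                return _source.Where(predicate).ToList();
            }
        }

    Usage then reads like the named-method style without the method explosion: repository.Find(p => p.CreatedOn >= date). The trade-off is that callers can still express arbitrary predicates, just not arbitrary query shaping (ordering, paging, joins), which you can add as explicit parameters where needed.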

    Read the article

  • How to check whether a user belongs to an AD group and nested groups?

    - by elsharpo
    Hi guys, I have an ASP.NET 3.5 application using Windows Authentication and implementing our own RoleProvider. The problem is we want to restrict access to a set of pages to a few thousand users, and rather than inputting all of those one by one, we found out they belong to an AD group. The answer is simple if the user is a direct member of the common group we are checking membership against, but the problem I'm having is that if the group is a member of another group, which is in turn a member of another group, then my code always returns false. For example: say we want to check whether User is a member of group "E", but User is not a direct member of "E"; she is a member of "A", which is a member of "B", which indeed is a member of "E"; therefore User is a member of "E". One of the solutions we have is very slow, although it gives the correct answer:

        using (var context = new PrincipalContext(ContextType.Domain))
        {
            using (var group = GroupPrincipal.FindByIdentity(context, IdentityType.Name, "DL-COOL-USERS"))
            {
                var users = group.GetMembers(true); // recursively enumerate
                return users.Any(a => a.Name == "userName");
            }
        }

    The original solution, and what I was trying to get to work using the .NET 3.5 System.DirectoryServices.AccountManagement namespace (it does work when the user is a direct member of the group in question), is as follows:

        public bool IsUserInGroup(string userName, string groupName)
        {
            var cxt = new PrincipalContext(ContextType.Domain, "DOMAIN");
            var user = UserPrincipal.FindByIdentity(cxt, IdentityType.SamAccountName, userName);
            if (user == null)
            {
                return false;
            }
            var group = GroupPrincipal.FindByIdentity(cxt, groupName);
            if (group == null)
            {
                return false;
            }
            return user.IsMemberOf(group);
        }

    The bottom line is, we need to check for membership even though the groups are nested many levels down. Thanks a lot!
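
    One alternative I've come across (not benchmarked here, so treat it as a sketch): UserPrincipal.GetAuthorizationGroups() expands the user's security group memberships, including nested ones, in a single call, which may be much faster than GetMembers(true) on a large group:

        using System;
        using System.DirectoryServices.AccountManagement;
        using System.Linq;

        public bool IsUserInGroupNested(string userName, string groupName)
        {
            using (var context = new PrincipalContext(ContextType.Domain, "DOMAIN"))
            using (var user = UserPrincipal.FindByIdentity(context, IdentityType.SamAccountName, userName))
            {
                if (user == null)
                {
                    return false;
                }
                // Returns the user's security groups, including those reached
                // through nesting (A -> B -> E), computed by the server.
                return user.GetAuthorizationGroups()
                           .Any(g => string.Equals(g.Name, groupName, StringComparison.OrdinalIgnoreCase));
            }
        }

    One caveat I believe applies: GetAuthorizationGroups only returns security groups, not distribution groups, so it fits this access-control case but not every membership check.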

    Read the article

  • How do DP and CC change in Piet?

    - by Paul Butcher
    According to the specification, Black colour blocks and the edges of the program restrict program flow. If the Piet interpreter attempts to move into a black block or off an edge, it is stopped and the CC is toggled. The interpreter then attempts to move from its current block again. If it fails a second time, the DP is moved clockwise one step. These attempts are repeated, with the CC and DP being changed between alternate attempts. If after eight attempts the interpreter cannot leave its current colour block, there is no way out and the program terminates. Unless I'm reading it incorrectly, this is at odds with the behaviour of the Fibonacci sequence example here: http://www.dangermouse.net/esoteric/piet/fibbig1.gif (from: http://www.dangermouse.net/esoteric/piet/samples.html). Specifically, why does the DP turn left at (0,3) ((0,0) being (top, left)) when it hits the left edge? At this point, both DP and CC are LEFT, so, by my reading, the sequence should then be:

    1. Attempt (and fail) to leave the block by going off the edge at (0,4).
    2. Toggle CC to RIGHT.
    3. Attempt (and fail) to leave the block by going off the edge at (0,2).
    4. Rotate DP to UP.
    5. Attempt (and succeed) to leave the block at (1,2) by entering the white block at (1,1).

    The behaviour indicated by the trace seems to be that the DP gets rotated all the way around, leaving the CC at LEFT. What have I misunderstood?

    Read the article

  • Percent-Encoded Percent in URI

    - by Lukas
    In our application, it is possible for a user to upload files and then download them later. We don't restrict them from having any special characters in the file name. The problem comes in when we create the link for the user to download the file. I use the Java URL encoder to encode the file name that gets put into the href of the link, but I'm still having problems with percent (%) signs. For example, if the user uploads a file named fi%le.jpg, the href that gets generated is fi%25le.jpg, and everything is fine. The problem is when the percent sign is right before the period (i.e., file%.jpg, which gets converted to file%25.jpg). When the user clicks on the link, they get a 404 (Not Found) error. The strange thing is that it is not a problem if the two characters following the percent sign are hex characters... Weird, eh? Any help is appreciated. I am using Tomcat/Struts. Could the built-in URL decoder have anything to do with this problem?
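
    In case it helps, here is a sketch of what I mean by "the Java URL encoder" (class names real, helper and file names hypothetical). One known quirk is that java.net.URLEncoder targets query strings, so it also turns spaces into '+', which is wrong inside a path segment:

        import java.io.UnsupportedEncodingException;
        import java.net.URLEncoder;

        public class HrefBuilder {
            // Encode a file name for use in a URL path: percent-encode it,
            // then fix URLEncoder's query-string-style '+' for spaces.
            public static String encodePathSegment(String fileName) throws UnsupportedEncodingException {
                return URLEncoder.encode(fileName, "UTF-8").replace("+", "%20");
            }

            public static void main(String[] args) throws UnsupportedEncodingException {
                System.out.println(encodePathSegment("fi%le.jpg"));  // fi%25le.jpg
                System.out.println(encodePathSegment("file%.jpg")); // file%25.jpg
            }
        }

    Both names encode to the same %25 pattern, which is why I suspect the decoding side (Tomcat or a filter re-decoding the path) rather than the encoding side.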

    Read the article

  • How to scrape Google SERP based on copyright year?

    - by Michael Mao
    Hi all: I know there must be ways to do this sort of thing. I am not a pro in RoR or Python, not even an expert in PHP, so my solution tends to be quite dumb: it uses a Firefox add-on called iMacros to scrape the target URLs from the Google SERP, and PHP to store the info in the database. At the very core of my workaround lies a problem: how do I specifically find target URLs based on their copyright year? I mean, something like "copyright 1998-2006" in the footer is to be considered a target, but my search results are not 100% accurate. I used the following URL to search: http://www.google.com.au/#hl=en&q=inurl:.com.au+intext:copyright+1995..2007+--2008+--2009&start=0&cad=b&fp=6a8119b094529f00 It reads: search for pages that have .com.au in the URL and a copyright range from 1995 to 2007, excluding the years 2008 and 2009. The starting position is 0; of course the offset can be changed. I've already done a dummy list, and honestly I am not pleased with the result. That's mostly because I cannot find a way to restrict the search terms to the exact order in which they are entered into the search URL: "copyright" can appear anywhere on the page and doesn't necessarily come before the years. Is there a clearer way to sort this out? Oh, I almost forgot to say the client doesn't want to spend too much on this - I cannot persuade him to simply buy some cool software, unfortunately. I hope there is a way to use clever Google search operators or similar things to get around this issue. Many thanks in advance!
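
    Since the operators can't pin the word order, my fallback idea is to post-filter in PHP: fetch each candidate page and verify the copyright range myself before storing it. A rough sketch (the regex is only an approximation of footer-style copyright lines, and the URL is a placeholder):

        <?php
        // Keep a page only if its copyright range ends between 1995 and 2007.
        function hasTargetCopyright($html) {
            if (preg_match('/copyright[^0-9]{0,15}(\d{4})(?:\s*-\s*(\d{4}))?/i', $html, $m)) {
                $lastYear = !empty($m[2]) ? (int)$m[2] : (int)$m[1];
                return $lastYear >= 1995 && $lastYear <= 2007;
            }
            return false;
        }

        $html = file_get_contents('http://www.example.com.au/');
        var_dump(hasTargetCopyright($html));

    That keeps Google only as a rough pre-filter and moves the accuracy problem into code I control.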

    Read the article

  • How to assign .net membership roles to individual database records

    - by mdresser
    I'm developing a system where we want to restrict the availability of information displayed to users based on their roles, e.g. I have a table called EventType (ID, EventTypeDescription) which contains the following records:

        1, 'Basic Event'
        2, 'Intermediate Event'
        3, 'Admin Event'

    What I need to achieve is to filter the records returned based on the username (and hence role) of the logged-in user, e.g. if an advanced user is logged in they will see all the event types, while if a standard user is logged in they will only see the basic event type, etc. Ideally I'd like to do this in a way which can be easily extended to other tables as necessary, so I'd like to avoid simply adding a 'Roles' field to each table where the data is user-context sensitive. One idea I'm considering is to create some kind of permissions table like:

        PermissionsTable
        (
            ID,
            Aspnet_RoleId,
            TableName,
            PrimaryKeyValue
        )

    The obvious drawback of this is having to use the table name to switch which table to join onto. Edit: In the absence of any better suggestions, I'm going to go with the last idea I mentioned, but instead of having a TableName field, I'm going to normalise the table name out to its own table as follows:

        TableNames
        (
            ID,
            TableName
        )

        UserPermissionsTable
        (
            ID,
            Aspnet_UserId,
            TableID,
            PrimaryKeyValue
        )
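
    To make the normalised idea concrete, here is a sketch of the query the data layer could run (illustrative SQL; the parameter name is hypothetical):

        -- Return only the EventType rows granted to the logged-in user.
        SELECT et.ID, et.EventTypeDescription
        FROM EventType et
        JOIN UserPermissionsTable up ON up.PrimaryKeyValue = et.ID
        JOIN TableNames tn ON tn.ID = up.TableID
        WHERE tn.TableName = 'EventType'
          AND up.Aspnet_UserId = @UserId;

    The TableName filter is the switch mentioned above; each new protected table reuses the same two permission tables with a different TableName value.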

    Read the article

  • Need AngularJS grid resizing directive to resize "thumbnail" that contains no image

    - by thebravedave
    UPDATE: Plunker for the project: http://plnkr.co/edit/oKB96szQhqwpKQbOGUDw?p=preview I have an AngularJS project that uses AngularJS Bootstrap grids. I need all of the grid elements to have the same size so they stack properly, so I created an AngularJS directive that auto-resizes a grid element when placed inside it. I have two directives that do this for me: Directive 1, onload, and Directive 2, imageonload. Directive 2 works. If the grid element uses an image, then after the image loads the directive triggers an event that sends the grid element's height to all other grid elements. If the height sent out via the event is greater than that of a listening grid element, the listening grid element changes its height to the greater height. This way the largest height becomes the height of all the grid elements. Directive 1 does not work. This one is placed on the outermost HTML element of the grid element and is triggered when the element loads. The problem is that when the element loads and the onload directive is called, AngularJS has not yet filled out the data in the grid element. The result is that the real height, after AngularJS data-binds, is never broadcast as an event. The only solution I have thought of (but haven't tried) is to give each grid element that currently has no image an image URL pointing to a blank image, and then call imageonload instead of onload; I'm pretty sure the AngularJS data binding will have taken place by then. The problem is that this is pretty hacky. I would rather keep the grid element image-free, call my custom onload directive, and have it calculate the height AFTER AngularJS has bound all of the data-binding variables in the grid element.
    Here is my imageonload directive:

        .directive('imageonload', function($rootScope) {
            return {
                restrict: 'A',
                link: function(scope, element, attrs) {
                    scope.heightArray = [];
                    scope.largestHeight = 50;
                    // Measure once the image has actually loaded.
                    element.bind('load', function() {
                        broadcastThumbnailHeight();
                    });
                    scope.$on('imageOnLoadEvent', function(caller, value) {
                        var el = angular.element(element);
                        var id = el.prop('id');
                        var pageName = el.prop('title');
                        if (pageName == value[0]) {
                            if (scope.largestHeight < value[1]) {
                                scope.largestHeight = value[1];
                                var nestedString = el.prop('alt');
                                if (nestedString == "") nestedString = "1";
                                var nested = parseInt(nestedString);
                                nested = nested - 1;
                                var inte = 0;
                                var thumbnail = el["0"];
                                var finalThumbnailContainer = thumbnail.parentElement;
                                // Walk up 'alt' levels to the grid element's outer container.
                                while (inte != nested) {
                                    finalThumbnailContainer = finalThumbnailContainer.parentElement;
                                    inte++;
                                }
                                var innerEl = angular.element(finalThumbnailContainer);
                                var height = value[1];
                                innerEl.height(height);
                            }
                        }
                    });
                    scope.$on('findHeightAndBroadcast', function() {
                        broadcastThumbnailHeight();
                    });
                    scope.$on('resetThumbnailHeight', function() {
                        scope.largestHeight = 50;
                    });
                    function broadcastThumbnailHeight() {
                        var el = angular.element(element);
                        var id = el.prop('id');
                        var alt = el.prop('alt');
                        if (alt == "") alt = "1";
                        var nested = parseInt(alt);
                        nested = nested - 1;
                        var pageName = el.prop('title');
                        var inte = 1;
                        var thumbnail = el["0"];
                        var finalThumbnail = thumbnail.parentElement;
                        while (inte != nested) {
                            finalThumbnail = finalThumbnail.parentElement;
                            inte++;
                        }
                        var elZero = el["0"];
                        var clientHeight = finalThumbnail.clientHeight;
                        var arr = [];
                        arr[0] = pageName;
                        arr[1] = clientHeight;
                        $rootScope.$broadcast('imageOnLoadEvent', arr);
                    }
                }
            };
        })

    And here is my onload directive:

        .directive('onload', function($rootScope) {
            return {
                restrict: 'A',
                link: function(scope, element, attrs) {
                    scope.largestHeight = 100;
                    // Runs immediately at link time, before bindings render.
                    getHeightAndBroadcast();
                    scope.$on('onLoadEvent', function(caller, value) {
                        var el = angular.element(element);
                        var id = el.prop('id');
                        var pageName = el.prop('title');
                        if (pageName == value[0]) {
                            if (scope.largestHeight < value[1]) {
                                scope.largestHeight = value[1];
                                var height = value[1];
                                el.height(height);
                            }
                        }
                    });
                    function getHeightAndBroadcast() {
                        var el = angular.element(element);
                        var h = el["0"].children;
                        var thumbnailHeightElement = angular.element(h);
                        var pageName = el.prop("title");
                        var clientHeight = thumbnailHeightElement["0"].clientHeight;
                        var arr = [];
                        arr[0] = pageName;
                        arr[1] = clientHeight;
                        if (clientHeight != undefined)
                            $rootScope.$broadcast('onLoadEvent', arr);
                    }
                }
            };
        })

    Here is an example of one of my grid elements that uses imageonload. Note the imageonload directive on the img element; this works. There is also an onload directive on the outermost HTML element of the grid element; that does not work. I have stepped through this carefully in Firebug and saw that onload was calculating the height before AngularJS data binding was complete.
        <div class="thumbnail col-md-3" id="{{product.id}}" title="thumbnailAdminProductsGrid" onload>
            <div class="row">
                <div class="containerz">
                    <div class="row-fluid">
                        <div class="col-md-2"></div>
                        <div class="col-md-7">
                            <div class="textcenterinline">
                                <!--tag--><img class="img-responsive" id="{{product.id}}" title="imageAdminProductsGrid" alt=6 ng-src="{{product.baseImage}}" imageonload/><!--end tag-->
                            </div>
                        </div>
                    </div>
                    <div class="caption">
                        <div class="testing">
                            <div class="row-fluid">
                                <div class="col-md-12">
                                    <h3 class="">
                                        <!--tag--><a href="javascript:void(0);" ng-click="loadProductView('{{product.id}}')">{{product.name}}</a><!--end tag-->
                                    </h3>
                                </div>
                            </div>
                            <div class="row-fluid">
                                <div class="col-md-12">
                                    <p class="lead"><!--tag--> {{product.price}}</p><!--end tag-->
                                </div>
                            </div>
                            <div class="row-fluid">
                                <div class="col-md-12">
                                    <p><!--tag-->{{product.inStock}} units available<!--end tag--></p>
                                </div>
                            </div>
                            <div class="row-fluid">
                                <div class="col-md-12">
                                    <p class=""><!--tag-->{{product.generalDescription}}<!--end tag--></p>
                                </div>
                            </div>
                            <!--tag-->
                            <div data-ng-if="product.specialized=='true'">
                                <div class="row-fluid">
                                    <div class="col-md-12" ng-repeat="varCat in product.varietyCategoriesAndOptions">
                                        <b><h4>{{varCat.varietyCategoryName}}</h4></b>
                                        <select ng-model="varCat.varietyCategoryOption" ng-options="value.varietyCategoryOptionId as value.varietyCategoryOptionValue for (key,value) in varCat.varietyCategoryOptions">
                                        </select>
                                    </div>
                                </div>
                            </div>
                            <!--end tag-->
                            <div class="row-fluid">
                                <div class="col-md-12">
                                    <!--tag--><div ng-if="product.weight==0"><b>Free Shipping</b></div><!--end tag-->
                                </div>
                            </div>
                        </div>
                    </div>
                </div>
            </div>
        </div>

    Here is an example of the HTML for one of my grid elements that only uses the "onload" directive and not "imageonload":

        <div class="thumbnail col-md-3" title="thumbnailCouponGrid" onload>
            <div class="innnerContainer">
                <div class="text-center">
                    {{coupon.name}}
                    <br />
                    <br />
                    <b>Description</b>
                    <br />
                    {{coupon.description}}
                    <br />
                    <br />
                    <button class="btn btn-large btn-primary" ng-click="goToCoupon()">View Coupon Details</button>
                </div>
            </div>
        </div>

    The imageonload directive might look a little confusing because I use the img "alt" attribute to tell the directive how many levels the imageonload element is nested below the outermost HTML element of the grid element; the directive needs this to know which element to set the new height on. I also use the "title" attribute to name which grid the resizing is for, so the directive can be used multiple times on the same page for different grids without triggering events for the wrong grid. Does anyone know how I can get the "onload" directive to be called AFTER AngularJS binds to the grid element? Just for completeness, here are two images (they almost look like one): the second is a grid of elements that have images and use the "imageonload" directive, and the first is a grid of elements that have no images and only use the "onload" directive.
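
    One idea I haven't fully tested (so take it as a sketch, reusing the event names from my code above): drop the load-time measurement in the non-image case and defer it with $timeout, which runs the callback after the current $digest, i.e. after the {{...}} bindings have been rendered:

        .directive('onload', function($rootScope, $timeout) {
            return {
                restrict: 'A',
                link: function(scope, element, attrs) {
                    // A 0ms $timeout fires after Angular finishes the current
                    // digest cycle, so clientHeight reflects the bound content.
                    $timeout(function() {
                        var el = angular.element(element);
                        var arr = [el.prop('title'), el["0"].clientHeight];
                        $rootScope.$broadcast('onLoadEvent', arr);
                    }, 0);
                }
            };
        })

    If that works, the existing 'onLoadEvent' listeners shouldn't need to change.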

    Read the article

  • In git, how can I get the diff between two dates?

    - by Weidenrinde
    Basically, I am looking for something equivalent to cvs diff -D"1 day ago" -D"2010-02-29 11:11". While collecting more and more information, I found a solution, which I paste here for others who might have similar problems. Things I have tried:

    - "In git, how can I get the diff between all the commits that occurred between two dates?" was answered here with:

          git whatchanged --since="1 day ago" -p

      But this gives a diff for each commit, even if there are multiple commits in one file. I know that "date" is a bit of a loose concept in git, but I thought there must be some way to do this.

    - git diff 'master@{1 day ago}'..master gives the warning

          warning: Log for 'master' only goes back to Tue, 16 Mar 2010 14:17:32 +0100.

      and does not show old diffs.

    - git format-patch --since=yesterday --stdout does not give anything.

    - revs=$(git log --pretty="format:%H" --since="1 day ago"); git diff $(echo "$revs"|tail -n1) $(echo "$revs"|head -n1) works somehow, but seems complicated and does not restrict to the current branch.

    - git diff $(git rev-list -n1 --before="1 day ago" master) seems to work and appears to be a standard way to do such things, although it is more complicated than I thought.

    Funnily, git-cvsserver does not support "cvs diff -D" (without that being documented anywhere).
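
    Putting the last finding together with both dates, here is a sketch of the full cvs-style two-date diff restricted to one branch (dates taken from the example above):

        # Diff the state of master as of each date, like `cvs diff -D<old> -D<new>`.
        git diff $(git rev-list -n1 --before="2010-02-29 11:11" master) \
                 $(git rev-list -n1 --before="1 day ago" master)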

    Read the article

< Previous Page | 30 31 32 33 34 35 36 37 38 39 40 41  | Next Page >