Search Results

Search found 65558 results on 2623 pages for 'large data'.


  • Sending HTML Newsletters in a Batch Using SQL Server

    Sending a large volume of emails can put a strain on a mail relay server. However, many people using SQL Server will need to do just that for things like newsletters, mailing lists, etc. Satnam Singh brings us a way to spread out the load by sending newsletters in multiple batches instead of one large process.
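
    The batching pattern generalizes beyond Satnam's script. Below is a minimal C# sketch of the idea; the dbo.Subscribers table, relay host, connection string, and batch size are illustrative assumptions, not part of the original article:

    // Minimal sketch: send a newsletter in batches so the relay server is
    // never handed thousands of messages at once. Table, columns, host,
    // and batch size are assumptions - adjust to your environment.
    using System;
    using System.Data.SqlClient;
    using System.Net.Mail;
    using System.Threading;

    class BatchMailer
    {
        static void Main()
        {
            const int batchSize = 200;   // messages per batch (assumption)
            int lastId = 0;
            bool more = true;

            using (var conn = new SqlConnection("Server=.;Database=News;Integrated Security=true"))
            using (var smtp = new SmtpClient("mail.example.com"))
            {
                conn.Open();
                while (more)
                {
                    more = false;
                    // Key-based paging: fetch the next slice of subscribers.
                    using (var cmd = new SqlCommand(
                        @"SELECT TOP (@n) SubscriberId, Email
                          FROM dbo.Subscribers
                          WHERE SubscriberId > @lastId
                          ORDER BY SubscriberId", conn))
                    {
                        cmd.Parameters.AddWithValue("@n", batchSize);
                        cmd.Parameters.AddWithValue("@lastId", lastId);
                        using (var rdr = cmd.ExecuteReader())
                        {
                            while (rdr.Read())
                            {
                                more = true;
                                lastId = rdr.GetInt32(0);
                                smtp.Send("news@example.com", rdr.GetString(1),
                                          "Newsletter", "<p>...</p>");
                            }
                        }
                    }
                    // Breathing room between batches keeps the relay happy.
                    Thread.Sleep(TimeSpan.FromSeconds(30));
                }
            }
        }
    }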

    Read the article

  • Undocumented Query Plans: The ANY Aggregate

    - by Paul White
    As usual, here’s a sample table: With some sample data: And an index that will be useful shortly: There’s a complete script to create the table and add the data at the end of this post.  There’s nothing special about the table or the data (except that I wanted to have some fun with values and data types). The Task We are asked to return distinct values of col1 and col2 , together with any one value from the thing column (it doesn’t matter which) per group.  One possible result set is shown...(read more)
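
    For readers who think in C# rather than query plans, the task maps onto a grouped "pick any one" projection. Here is a small LINQ sketch of the same requirement; the row shape and sample values are made up for illustration:

    // Sketch: distinct (Col1, Col2) pairs plus any one Thing per group -
    // the job the undocumented ANY aggregate performs inside a plan.
    using System;
    using System.Linq;

    class AnyAggregateSketch
    {
        static void Main()
        {
            var rows = new[]
            {
                (Col1: 1, Col2: 1, Thing: "apple"),
                (Col1: 1, Col2: 1, Thing: "pear"),
                (Col1: 2, Col2: 3, Thing: "banana"),
            };

            // For each distinct (Col1, Col2), take an arbitrary Thing.
            var result = rows
                .GroupBy(r => new { r.Col1, r.Col2 })
                .Select(g => new { g.Key.Col1, g.Key.Col2, Thing = g.First().Thing });

            foreach (var r in result)
                Console.WriteLine($"{r.Col1}, {r.Col2}, {r.Thing}");
        }
    }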

    Read the article

  • Sync Google Calendar with SharePoint Calendar

    - by dataintegration
    The ADO.NET Providers for Google and SharePoint make it easy to retrieve and update data in both Google's web services and SharePoint. This article shows how the SQL interface to data makes it easy to build applications that need to move data from one source to another. The application described here is a demo Windows application that synchronizes calendar events between Google and SharePoint, but the RSSBus Providers can be used to achieve integrations on both the .NET and Java platforms, including more sophisticated features like full automation.

    Getting the Events

    Step 1: Google accounts can have several calendars. Obtain a list of a user's Google Calendars by issuing a query to the Calendars table. For example: SELECT * FROM Calendars

    Step 2: To get a list of the events from a given Google Calendar, issue a query to the CalendarEvents table while specifying the CalendarId from the Calendars table. The resulting events can be further filtered by using the StartDateTime or EndDateTime columns. For example: SELECT * FROM CalendarEvents WHERE (CalendarId = '[email protected]') AND (StartDateTime >= '1/1/2012') AND (StartDateTime <= '2/1/2012')

    Step 3: SharePoint stores data in lists. There are various types of lists, e.g., document lists and calendar lists, and a SharePoint account can have several lists of the same type. To find all the calendar lists in SharePoint, use the ListLists stored procedure and inspect the BaseTemplate column.

    Step 4: The SharePoint data provider models each SharePoint list as a table. Get the events in a particular calendar by querying the table with the same name as the list. The events may be filtered further by specifying the EventDate or EndDate columns. For example: SELECT * FROM Calendar WHERE (EventDate >= '1/1/2012') AND (EventDate <= '2/1/2012')

    Synchronizing the Events

    Synchronizing the events is a simple process. Once the events from Google and SharePoint are available, they can be compared and synchronized based on user preference. The sample application does this based on user input, but it is easy to create one that does the synchronization automatically. The INSERT, UPDATE, and DELETE statements available in both data providers make it easy to create, update, or delete events as needed.

    Pre-Built Demo Application

    The executable for the demo application can be downloaded here. Note that this demo is built using BETA builds of the ADO.NET Provider for Google V2 and the ADO.NET Provider for SharePoint V2, and will expire in 2013.

    Source Code

    You can download the full source of the demo application here. You will need the Google ADO.NET Data Provider V2 and the SharePoint ADO.NET Data Provider V2, which can be obtained here.
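
    Under the providers' SQL surface this is plain ADO.NET, so reading a month of events can be sketched generically. The provider invariant name, connection string, and column name below are placeholders, not the real values - check the product documentation:

    // Sketch only: reading Google Calendar events through an ADO.NET
    // provider's SQL surface. Names marked as assumed are placeholders.
    using System;
    using System.Data.Common;

    class CalendarReader
    {
        static void Main()
        {
            DbProviderFactory factory =
                DbProviderFactories.GetFactory("RSSBus.Google"); // assumed invariant name
            using (DbConnection conn = factory.CreateConnection())
            {
                conn.ConnectionString = "User=...;Password=...;"; // placeholder
                conn.Open();

                DbCommand cmd = conn.CreateCommand();
                // Parameter syntax may vary by provider.
                cmd.CommandText =
                    "SELECT * FROM CalendarEvents " +
                    "WHERE (CalendarId = @cal) " +
                    "AND (StartDateTime >= '1/1/2012') AND (StartDateTime <= '2/1/2012')";
                DbParameter p = cmd.CreateParameter();
                p.ParameterName = "@cal";
                p.Value = "primary"; // the CalendarId found in the Calendars table
                cmd.Parameters.Add(p);

                using (DbDataReader rdr = cmd.ExecuteReader())
                    while (rdr.Read())
                        Console.WriteLine(rdr["Title"]); // column name assumed
            }
        }
    }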

    Read the article

  • Microsoft and Application Architectures

    Microsoft has dealt with several kinds of application architectures, including but not limited to desktop applications, web applications, operating systems, relational database systems, Windows services, and web services. Because of Microsoft's size and market share, virtually every modern language works with or around a Microsoft product. The languages and markup technologies involved include Visual Basic, VB.NET, C#, C++, C, ASP.NET, ASP, HTML, CSS, JavaScript, Java, and XML.

    From my experience, Microsoft strives to maintain an n-tier application standard in which an application is composed of multiple layers that perform specific functions; the presentation layer, business layer, and data access layer are three general layers that just about every formally structured application contains. The presentation layer handles anything to do with displaying information to the screen and how it appears there. The business layer is the middleman between the presentation layer and the data access layer; it transforms data from the data access layer into usable information to be stored later or sent to an output device through the presentation layer. The data access layer does as its name implies: it allows the business layer to access data from a data source like MS SQL Server, XML, or another data source.

    One of my favorite technologies that Microsoft has come out with recently is the .NET Framework. This framework allows developers to write an application in multiple languages, each compiled to a common intermediate language (CIL) that executes on the Common Language Runtime (CLR). This allows VB and C# developers to work seamlessly together as if they were working in the same project. The only real disadvantage of the .NET Framework is that it only natively runs on Microsoft operating systems. However, Microsoft does control a majority of the operating systems currently installed on modern computers and servers, especially personal home computers. Given how flexible the .NET Framework is, it is ideal for businesses to develop applications around, as long as they are willing to commit to Microsoft technologies and operating systems in the future.

    I have been a professional developer for about 9+ years now and have seen the .NET Framework work flawlessly in just about every instance I have used it. In addition, I have used it to develop web applications, mobile phone applications, desktop applications, web service applications, and Windows service applications, to name a few.
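
    As a concrete illustration of that layering, here is a tiny self-contained C# sketch; the class names and the hard-coded data are invented for the example:

    // Minimal sketch of the classic three-layer split described above.
    // Class and method names are illustrative, not from any real product.
    using System;
    using System.Collections.Generic;

    // Data access layer: the only layer that knows where data lives.
    class CustomerRepository
    {
        public IEnumerable<string> LoadCustomerNames()
        {
            // Would query SQL Server, XML, etc.; hard-coded here for brevity.
            return new[] { "Contoso", "Fabrikam" };
        }
    }

    // Business layer: turns raw data into usable information.
    class CustomerService
    {
        private readonly CustomerRepository _repo = new CustomerRepository();

        public IEnumerable<string> GetDisplayNames()
        {
            foreach (var name in _repo.LoadCustomerNames())
                yield return name.ToUpperInvariant(); // stand-in business rule
        }
    }

    // Presentation layer: only concerned with showing information.
    class Program
    {
        static void Main()
        {
            var service = new CustomerService();
            foreach (var name in service.GetDisplayNames())
                Console.WriteLine(name);
        }
    }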

    Read the article

  • T-SQL Tuesday #006 Round-up!

    - by Mike C
    T-SQL Tuesday this month was all about LOB (large object) data. Thanks to all the great bloggers out there who participated! The participants this month posted some very impressive articles with information running the gamut from Reporting Services to SQL Server spatial data types to BLOB-handling in SSIS. One thing I noticed immediately was a trend toward articles about spatial data (SQL Server 2008 Geography and Geometry data types, a very fun topic to explore if you haven’t played around with...(read more)

    Read the article

  • How to explain bad software to non-technical people?

    - by mtutty
    In discussing software development with non-technical people (customers, business owners, project sponsors, etc.), I often resort to analogies and metaphors. It's relatively easy and effective to use a "house" or other metaphor for describing the size and complexity of new development. However, we often inherit someone else's code or data, and this approach doesn't seem to hold up as well when trying to explain why we're gutting something that already seems to work. Of course we can point to cycle time and cost to be saved in the future, but this generally means nothing to business folks. I know doctors can say "just take this pill," but I'm not sure that software devs have the same authority. Ideas?

    EDIT: Let me add a bit to the discussion. The specific project I'm talking about has customers that don't realize (or care) about specific aspects of the system we're retiring (i.e., they think it was just fine):

    - The system would save a NEW RECORD every time someone updated a field.
    - The system contained tables for reference data. These tables had new records added every day, even though they were duplicates of previous records. And there was no way to tie the reference data used for a particular case at the time it was closed. This is like 99% of the data in the old system.
    - The field NAMES also have spaces, apostrophes and other inappropriate characters in them, making everything harder to work with.
    - In addition to the incredible amount of duplicate data, they have around 1000 XLS files with data they want added to the system. Previously, they would do a spreadsheet for each case in the database, IN ADDITION TO what they typed into the database.

    Getting rid of this old, unneeded information and piping in the XLS data comprises about 80% of the total project effort, and was not something we could accurately predict. I'm trying to find a concrete way to describe how bad this thing was, mostly so that the customer will understand why the migration process has been so time-consuming. The actual coding was done pretty quickly and the new system works fine, but without the old data they won't be happy.

    Sorry to get into the weeds, but most of the answers I've seen so far are pretty basic scope/schedule/cost things. I've been doing this for 15 years, so this really is more of a reflective, philosophical question - but without some of the details it can be difficult to really appreciate the awful beauty of this problem.

    Read the article

  • Is Innovation Dead?

    - by wulfers
    My question: has innovation died? For large businesses that do not have vibrant, fearless leadership (see Apple under Steve Jobs), I think it has. If you look at the organizational charts of many of the large corporate megaliths, you will see a plethora of middle managers who are so risk averse that innovation (any change involves risk) is choked off, since there are no innovation champions in the middle layers. And innovation driven top-down can only happen when you have a visionary in the top ranks, and that is also very rare.

    So where is actual innovation happening? At the bottom layer, among the people who live in the trenches - the people who live for a challenge. So how can big business leverage this innovation layer? Remove the middle management layer. Provide an innovation champion who has an R&D budget and is tasked with working with the bottom layer of a company - the engineers, developers and business analysts who live on the edge (where the corporate tires meet the road).

    Here are two innovation failures I will tell you about, both from a company so risk averse it is starting to fail in its primary business ventures. This company initiated an innovation process several years ago. The process was driven companywide, with team managers being the central points of collection for innovative ideas. These managers were given no budget to do anything with these ideas, and there was no process or incentive for them to drive innovation within their teams. This lasted close to a year, and the innovation program slowly slipped into oblivion...

    A second example: this same company failed in an attempt to market a consumer product in a line where there was already a major market leader. This product was under development for several years and needed to provide some major device differentiation from the current market leader. This same company had a large Lead Technologist community made up of real innovators in all areas of technology. Did this same company leverage the skills and experience of this internal community? No!!!

    So to wrap this up: if large companies really want to survive, they need to start acting like small companies. Support those innovators and risk takers! Reward them by implementing their innovative ideas. Champion (from the top down) innovation (found at the bottom) in your companies. Remember, if you stand still you are really falling behind. Do it now! Take a risk!

    Read the article

  • Gigantic 2d maps?

    - by Jesper Hallenberg
    I've been thinking about a game idea for a week or two, and in the last few hours of working through some technical details I realized the map would need to be 360,000 x 360,000 pixels in size. To me it sounds insane, but I've never done much in-depth game development, so I'm not sure whether it's reasonable to have a map that large. Basically my question is: would a map that large work in a 2D game?
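
    For perspective, a raw 360,000 x 360,000 bitmap is about 130 billion pixels, roughly 520 GB at 4 bytes per pixel, so maps this size are never stored or drawn as one image. One common approach, sketched below in C# with assumed tile and viewport sizes, is to split the world into fixed-size tiles and load only the handful visible around the camera:

    // Sketch of tile-based streaming for a huge 2D map. The tile size and
    // viewport dimensions are arbitrary assumptions for illustration.
    using System;

    class TileStreamer
    {
        const int TileSize = 1024; // pixels per square tile (assumption)

        // Which tiles does a viewport at (camX, camY) need?
        static void VisibleTiles(int camX, int camY, int viewW, int viewH)
        {
            int firstTx = camX / TileSize;
            int firstTy = camY / TileSize;
            int lastTx = (camX + viewW - 1) / TileSize;
            int lastTy = (camY + viewH - 1) / TileSize;

            for (int ty = firstTy; ty <= lastTy; ty++)
                for (int tx = firstTx; tx <= lastTx; tx++)
                    Console.WriteLine($"load tile ({tx}, {ty})"); // e.g. from a disk cache
        }

        static void Main()
        {
            // A 1920x1080 view near the middle of a 360,000-pixel-wide world
            // touches only a handful of the roughly 124,000 total tiles.
            VisibleTiles(180_000, 180_000, 1920, 1080);
        }
    }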

    Read the article

  • OpenWorld - Database Security Demonstrations in Moscone South Left

    - by Troy Kitch
    All this week, Oracle security experts will be giving live product demos of Oracle Database Security solutions in Moscone South Left, in the Oracle DEMOgrounds for "database." Demonstrations include Oracle Database Defense-in-Depth Security, Database Application Data Redaction, Transparent Data Encryption, Oracle Audit Vault and Database Firewall, Data Masking and Data Subsetting. Don't miss it!

    Read the article

  • Right-Time Retail Part 1

    - by David Dorf
    This is the first in a three-part series.

    Right-Time Revolution

    Technology enables some amazing feats in retail. I can order flowers for my wife while flying 30,000 feet in the air. I can order my groceries in the subway and have them delivered later that day. I can even see how clothes look on me without setting foot in a store. Who knew that a TV, diamond necklace, or even a car would someday be as easy to purchase as a candy bar? Can technology make a mattress an impulse item? Wake up and your back is hurting, so you roll over and grab your iPad, and a new mattress is delivered the next day. Behind the scenes, many processes are being choreographed to make the sale happen. This includes moving data between systems with the least amount of friction, which in some cases is near real-time.

    But real-time isn't appropriate for all integrations. Think about what a completely real-time retailer would look like. A consumer grabs toothpaste off the shelf, and all systems are immediately notified so that the backroom clerk comes running out and pushes the consumer aside so he can replace the toothpaste on the shelf. Such a system is not only cost prohibitive, it's also very inefficient and ineffectual. Retailers must balance the realities of people, processes, and systems to find the right speed of execution. That's what "right-time retail" means.

    Retailers used to sell during the day and count the money and restock at night, but global expansion and the Web have complicated that simplistic viewpoint. Our 24-hour society demands not only access but also speed, which constantly pushes the boundaries of our IT systems. In the last twenty years, three major technology advancements have moved us closer to real-time systems.

    Networking is the first technology that drove the real-time trend. As systems became connected, it became easier to move data between them. In retail we no longer had to mail the daily business report back to corporate each day; the dial-up modem could transfer the data. That was soon replaced with trickle-polling, where sale transactions were occasionally sent from stores to corporate throughout the day, often over VSAT. Then we got terrestrial networks like DSL and Ethernet that allowed a constant stream of data between stores and corporate. When corporate could see the sales transactions coming from stores, it could better plan for replenishment and promotions. That drove the need for speed into the supply chain and merchandising, but for many years those systems were stymied by huge volumes of data: Nordstrom has 150 million SKU/store combinations when planning (RPAS); The Gap generates 110 million price changes during end-of-season (RPM); Argos executes 1.78 billion calculations each day for replenishment planning (AIP).

    These areas are now being alleviated by the second technology, storage. The typical laptop disk drive runs at 5,400 rpm, with PCs stepping up to 7,200 rpm and servers hitting 15,000 rpm. But the platters can only spin so fast, so to squeeze out more performance we've had to rely on things like disk striping. Then solid state drives (SSDs) were introduced, and prices continue to drop. (Augmenting your hard drive with an SSD is the single best PC upgrade these days.) RAM continues to be expensive, but compressing data in memory has allowed more efficient use. So a few years back, Oracle decided to build a box that incorporated all these advancements to move us closer to real-time. This family of products, often categorized as engineered systems, combines the hardware and software so that they work together to provide better performance. How much better? If Exadata powered a 747, you'd go from New York to Paris in 42 minutes, and it would carry 5,000 passengers. If Exadata powered baseball, games would last only 18 minutes and Boston's Fenway would hold 370,000 fans. The Exa family enables processing more data in less time.

    So with faster networks and storage, that brings us to the third and final ingredient. If we continue to process data in traditional ways, we won't be able to take advantage of the faster networks and storage. Enter what Harvard calls "The Sexiest Job of the 21st Century" - the data scientist. New technologies like the Hadoop-powered Oracle Big Data Appliance, Oracle Advanced Analytics, and Oracle Endeca Information Discovery change the way in which we organize data. These technologies allow us to extract actionable information from raw data at incredible speeds, often ad hoc.

    So the foundation to support the real-time enterprise exists, but how does a retailer begin to take advantage? The most visible way is through real-time marketing, but I'll save that for part 3 and instead begin with improved integrations for the assets you already have in part 2.

    Read the article

  • Nails vs Screws (C# List vs Dictionary)

    - by MarkPearl
    General

    This may sound like a typical noob statement, but I'm finding out in a very real way that just because you have a solution to a problem doesn't necessarily mean it is the best solution. This was reiterated to me when a friend of mine suggested I look at using Dictionaries instead of Lists for a particular problem - he was right, and I had always just assumed that because lists solved my problem I did not need to look elsewhere. So my new manifesto to counter this ageless problem is as follows:

    1. Look for a solution that will logically work.
    2. Once you have a solution, look for possible alternatives.
    3. Decide why your current solution is the best approach compared to the alternatives.
    4. If it is, use it till something better comes along; if it isn't, change.

    What's the difference between Lists and Dictionaries?

    Both lists and dictionaries are used to store collections of data. Assume we had the following declarations:

    var dic = new Dictionary<string, long>();
    var lst = new List<long>();
    long data;

    With a list, you simply add the item and it is appended to the end of the list:

    lst.Add(data);

    With a dictionary, you need to specify some sort of key along with the data you want to add, so that the entry can be uniquely identified:

    dic.Add(uniquekey, data);

    Because with a dictionary you have a unique identifier, all sorts of optimized algorithms are provided in the background to find your associated data. What this means is that accessing your data is a lot faster than with a List. So when is it appropriate to use either class? For me, if I can guarantee that each item in my collection will have a unique identifier, then I will use Dictionaries instead of Lists, as there is a considerable performance benefit when accessing each data item. If I cannot make this sort of guarantee, then by default I will use a list. I know this is all really basic, and I hope I haven't missed some fundamental principle... If anyone would like to add their 2 cents, please feel free to do so...
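
    To make the access-time difference concrete, here is a small sketch contrasting an O(n) list scan with an O(1) dictionary lookup; the collection size and keys are arbitrary:

    // Sketch: the same lookup done against a List (linear scan) and a
    // Dictionary (hash lookup). Sizes and key names are arbitrary.
    using System;
    using System.Collections.Generic;
    using System.Diagnostics;

    class LookupDemo
    {
        static void Main()
        {
            const int n = 1_000_000;
            var lst = new List<long>(n);
            var dic = new Dictionary<string, long>(n);
            for (int i = 0; i < n; i++)
            {
                lst.Add(i);
                dic.Add("key" + i, i);
            }

            var sw = Stopwatch.StartNew();
            long fromList = lst.Find(x => x == n - 1);        // O(n): walks the list
            Console.WriteLine($"List.Find:   {sw.Elapsed} -> {fromList}");

            sw.Restart();
            dic.TryGetValue("key" + (n - 1), out long fromDic); // O(1): hashes the key
            Console.WriteLine($"TryGetValue: {sw.Elapsed} -> {fromDic}");
        }
    }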

    Read the article

  • M2M Solutions: The Move to Value Creation and the Internet of Things

    - by Javier Puerta
    There's a new Oracle-sponsored report available around big data, specifically machine to machine data (there will probably be more growth in m2m data than human-generated stuff like social media). Forbes published an article, Big Data Set to Explode as 40 Billion New Devices Connect to Internet, which references the report. Login to Download the M2M Solutions Report Good reading!

    Read the article

  • Batch change modified/created dates?

    - by Billiam
    I recently bought new hard drives for my NAS. This means that I'm copying all the data off the NAS, upgrading it, and then moving the data back. I've gotten as far as copying the data from the NAS, but every file's modified/created date has been changed to when it was copied (today). Is there a way, keeping in mind that I have the original data, to batch update the modified/created dates on the copied files without having to copy them over again (we're talking over a terabyte of data)?
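
    Since the originals still carry the correct timestamps, one workable approach is to walk both trees and copy only the metadata onto the already-copied files. A minimal C# sketch follows; the paths are placeholders, and it is worth trying on a small test folder first:

    // Sketch: copy modified/created timestamps from an original tree onto
    // an already-copied tree with the same relative layout. Paths are
    // placeholders; test against a small folder before trusting it.
    using System;
    using System.IO;

    class StampSync
    {
        static void Main()
        {
            const string src = @"\\nas\original"; // placeholder
            const string dst = @"D:\copy";         // placeholder

            foreach (string srcFile in Directory.EnumerateFiles(
                         src, "*", SearchOption.AllDirectories))
            {
                string rel = srcFile.Substring(src.Length + 1);
                string dstFile = Path.Combine(dst, rel);
                if (!File.Exists(dstFile)) continue; // only touch files that were copied

                File.SetCreationTimeUtc(dstFile, File.GetCreationTimeUtc(srcFile));
                File.SetLastWriteTimeUtc(dstFile, File.GetLastWriteTimeUtc(srcFile));
            }
            Console.WriteLine("Timestamps synchronized.");
        }
    }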

    Read the article

  • Where ORMs blur the lines between code and data, how do you decide what logic should be a stored procedure, and what should be coded?

    - by PhonicUK
    Take the following pseudocode:

    CreateInvoiceAndCalculate(ItemsAndQuantities, DispatchAddress, User);

    And say CreateInvoice does the following:

    1. Create a new entry in an Invoices table belonging to the specified User, to be sent to the given DispatchAddress.
    2. Create a new entry in an InvoiceItems table for each of the items in ItemsAndQuantities, storing the Item, the Quantity, and the cost of the item as of now (by looking it up from an Items table).
    3. Calculate the total amount of the invoice (ex shipping and taxes) and store it in the new Invoice row.

    At a glance you wouldn't be able to tell whether this was a method in my application's code or a stored procedure in the database being exposed as a function by the ORM. And to some extent it doesn't really matter. Now technically none of this is business logic. You're not making any decisions - just performing a calculation and creating records. However, some may argue that because you are performing a calculation that affects the business (the total amount to be invoiced), this isn't something that should be done in a stored procedure and instead should be in code.

    So for this specific example, why would it be more appropriate to do one or the other? And where do you draw the line? Or does it even particularly matter, as long as it's sufficiently well documented?
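
    For reference, here is what the "in code" side of the question might look like: the same three steps done in the application inside one transaction. The schema, connection string, and method shape are all hypothetical:

    // Sketch of the application-code alternative to a stored procedure.
    // Schema, connection string, and types are hypothetical.
    using System;
    using System.Collections.Generic;
    using System.Data.SqlClient;

    class InvoiceWriter
    {
        public static void CreateInvoiceAndCalculate(
            IDictionary<int, int> itemsAndQuantities, string dispatchAddress, int userId)
        {
            using (var conn = new SqlConnection("Server=.;Database=Shop;Integrated Security=true"))
            {
                conn.Open();
                using (var tx = conn.BeginTransaction())
                {
                    // 1. Create the invoice row.
                    var cmd = new SqlCommand(
                        @"INSERT INTO Invoices (UserId, DispatchAddress, Total)
                          OUTPUT INSERTED.InvoiceId VALUES (@u, @a, 0)", conn, tx);
                    cmd.Parameters.AddWithValue("@u", userId);
                    cmd.Parameters.AddWithValue("@a", dispatchAddress);
                    int invoiceId = (int)cmd.ExecuteScalar();

                    // 2. One InvoiceItems row per item, priced as of now.
                    foreach (var kv in itemsAndQuantities)
                    {
                        var item = new SqlCommand(
                            @"INSERT INTO InvoiceItems (InvoiceId, ItemId, Quantity, Cost)
                              SELECT @inv, ItemId, @qty, Price * @qty
                              FROM Items WHERE ItemId = @item", conn, tx);
                        item.Parameters.AddWithValue("@inv", invoiceId);
                        item.Parameters.AddWithValue("@item", kv.Key);
                        item.Parameters.AddWithValue("@qty", kv.Value);
                        item.ExecuteNonQuery();
                    }

                    // 3. Roll the line totals up onto the invoice.
                    var total = new SqlCommand(
                        @"UPDATE Invoices SET Total =
                            (SELECT SUM(Cost) FROM InvoiceItems WHERE InvoiceId = @inv)
                          WHERE InvoiceId = @inv", conn, tx);
                    total.Parameters.AddWithValue("@inv", invoiceId);
                    total.ExecuteNonQuery();

                    tx.Commit();
                }
            }
        }
    }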

    Read the article

  • Reflecting on week long Training with SQLSkills

    - by NeilHambly
    Time for a quick reflection on my five days of training with SQLSkills. They have four weeks in their immersion training program; this was week 1, Internals & Performance, held at a large Heathrow hotel: http://www.sqlskills.com/T_ImmersionInternalsDesign.asp So was the course worth the time and money? Undoubtedly. I believe a large number of the people there were also self-funding, along with the lucky corporate-sponsored ones. It was akin to doing, say, the "London marathon" in that you know...(read more)

    Read the article

  • RAID superblock missing on single partition. Recovery needed!

    - by user171639
    OK, so I have a 2 TB RAID 1 setup that has three partitions:

    sdc1: Linux
    sdc2: swap
    sdc3: LVM for data

    However, the LVM will no longer mount. So I thought that I would take the first drive, mount it in Linux (I've done this before), and reset the spare drive to copy the data. Normally I can mount a single drive for data recovery using:

    sudo su
    apt-get install mdadm lvm2
    mdadm --assemble --scan
    modprobe dm-mod
    vgscan
    vgchange -ay c
    mount -o ro /dev/c/c /mnt

    Unfortunately, vgscan does not recognize the data partition. It appears as though the superblock on the first drive's data partition was erased while syncing with the second. So now I cannot mount that partition, and the second drive is stuck in spare mode. Any ideas? Or a way to force-mount the data partition just to copy the data?

    knoppix@Microknoppix:~$ sudo su
    root@Microknoppix:/home/knoppix# apt-get install mdadm lvm2
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    lvm2 is already the newest version.
    mdadm is already the newest version.
    0 upgraded, 0 newly installed, 0 to remove and 551 not upgraded.
    root@Microknoppix:/home/knoppix# mdadm --assemble --scan
    mdadm: /dev/md/1 has been started with 1 drive (out of 2).
    mdadm: /dev/md/0 has been started with 1 drive (out of 2).
    root@Microknoppix:/home/knoppix# modprobe dm-mod
    root@Microknoppix:/home/knoppix# vgscan
      Reading all physical volumes. This may take a while...
      No volume groups found
    root@Microknoppix:/home/knoppix# cat /proc/mdstat
    Personalities : [raid1]
    md0 : active raid1 sdc1[2]
          4193268 blocks super 1.2 [2/1] [U_]
    md1 : active raid1 sdc2[2]
          524276 blocks super 1.2 [2/1] [U_]
    unused devices: <none>
    root@Microknoppix:/home/knoppix# mdadm -v --assemble --auto=yes /dev/md2 /dev/sdc3
    mdadm: looking for devices for /dev/md2
    mdadm: no recogniseable superblock on /dev/sdc3
    mdadm: /dev/sdc3 has no superblock - assembly aborted
    root@Microknoppix:/home/knoppix# dumpe2fs /dev/md0 | grep -i superblock
    dumpe2fs 1.42.4 (12-Jun-2012)
    Primary superblock at 0, Group descriptors at 1-1
    Backup superblock at 32768, Group descriptors at 32769-32769
    Backup superblock at 98304, Group descriptors at 98305-98305
    Backup superblock at 163840, Group descriptors at 163841-163841
    Backup superblock at 229376, Group descriptors at 229377-229377
    Backup superblock at 294912, Group descriptors at 294913-294913
    Backup superblock at 819200, Group descriptors at 819201-819201
    Backup superblock at 884736, Group descriptors at 884737-884737
    root@Microknoppix:/home/knoppix#

    Notes: I can read the superblock from the spare drive. I was going to try to restore the superblock from one of the backups, but I don't know how, or whether it would work. I also heard that creating a new array (mdadm --create) using the same parameters will not delete the data on the drive, but I didn't want to risk it. Recommendations?

    Read the article

  • How to stretch the image from one screen to two screens

    - by wxiiir
    I want to be able to stretch the image from one monitor across a second monitor, even if I have to use some software to do it. For example, I want the top half of the image that is now shown on my first monitor to occupy the whole second monitor, and the bottom half to occupy the whole first monitor - or the other way around, I don't really care, as long as it works. It would also be cool to know how to do the same with the left and right halves. I know the image pixels would be twice their length or height this way, but I don't really care as long as it works; basically I want the picture to show on both monitors, but with the same pixels as before.

    I have an HD 4870 and Windows 7, and the HD 4000 family doesn't support having two monitors behave like one large one - only HD 5000 and upwards. That would solve my problem without any of the drawbacks, but it just can't be done (or maybe it can via software, but I'm just too tired of searching). A solution to make almost any graphics card treat two monitors as one large one is the Matrox DualHead2Go, but that's just as expensive as a good HD 5000 card, so it's not worth it. Thanks in advance.

    EDIT: I guess nobody so far was able to fully comprehend my problem, which was very explicitly written, so I will elaborate some more. My HD 4870 can drive two monitors, but some things, like games, won't run across both monitors, which sucks. There are some ways to circumvent this problem; two of them are perfect or almost perfect but expensive, and the third would be a software solution. The first is an HD 5000-family video card, which will work just fine with both monitors. The second is a Matrox DualHead2Go, which will make my HD 4870 detect my two monitors as one large monitor. The third is software that makes my two displays be detected as one large display, then captures the output of the video card, splits the image, and renders the halves as 2D images to both monitors; or, a simpler option that would double the width or height of the output pixels: capture the output of the graphics card for one screen, split it in two, enlarge each half to fit a monitor, and output it to the two monitors.

    P.S. By capturing the output of the video card I just mean making the video card process the image in a certain way. Making the video card detect two monitors as one large one via software may be a bit impossible or impracticable, but stretching the output as a 2D image from one monitor to both should be a walk in the park for some coders, so it seems likely that such a program exists, or that some widespread dual-monitor software has such a function built in.

    Read the article

  • Importance of the SEO and Web Hosting Course

    The most important thing about having a website for your business is the site's ability to generate a large volume of traffic; it matters little how well the site is designed or created otherwise. Even if the services or products offered by the business are of high quality, if the site is not able to generate a large volume of traffic then surely you won't be able to sell anything.

    Read the article

  • Fusion Product Hub for Supply Chain Management

    Oracle Fusion Product Hub is a key component of Oracle's Supply Chain and Master Data Management strategy. Using a revolutionary approach to product master data management processes, Product Hub delivers:

    1. A unified and accurate product definition that is harmonized within and across the enterprise value chain.
    2. Flexible and robust data governance workflows and policies to govern product master data.
    3. A product dashboard and embedded analytics to enable informed and quick decisions.

    Read the article

  • Understanding and Implementing a Force based graph layout algorithm

    - by zcourts
    I'm trying to implement a force-based graph layout algorithm, based on http://en.wikipedia.org/wiki/Force-based_algorithms_(graph_drawing) My first attempt didn't work, so I looked at http://blog.ivank.net/force-based-graph-drawing-in-javascript.html and https://github.com/dhotson/springy I changed my implementation based on what I thought I understood from those two, but I haven't managed to get it right and I'm hoping someone can help? JavaScript isn't my strong point so be gentle... If you're wondering why I'd write my own: in reality I have no real reason, I'm just trying to understand how the algorithm is implemented. Especially in my first link - that demo is brilliant. This is what I've come up with:

    // support function.bind - https://developer.mozilla.org/en/JavaScript/Reference/Global_Objects/Function/bind#Compatibility
    if (!Function.prototype.bind) {
      Function.prototype.bind = function (oThis) {
        if (typeof this !== "function") {
          // closest thing possible to the ECMAScript 5 internal IsCallable function
          throw new TypeError("Function.prototype.bind - what is trying to be bound is not callable");
        }
        var aArgs = Array.prototype.slice.call(arguments, 1),
            fToBind = this,
            fNOP = function () {},
            fBound = function () {
              return fToBind.apply(this instanceof fNOP ? this : oThis || window,
                                   aArgs.concat(Array.prototype.slice.call(arguments)));
            };
        fNOP.prototype = this.prototype;
        fBound.prototype = new fNOP();
        return fBound;
      };
    }

    (function() {
      var lastTime = 0;
      var vendors = ['ms', 'moz', 'webkit', 'o'];
      for (var x = 0; x < vendors.length && !window.requestAnimationFrame; ++x) {
        window.requestAnimationFrame = window[vendors[x]+'RequestAnimationFrame'];
        window.cancelAnimationFrame = window[vendors[x]+'CancelAnimationFrame'] ||
                                      window[vendors[x]+'CancelRequestAnimationFrame'];
      }
      if (!window.requestAnimationFrame)
        window.requestAnimationFrame = function(callback, element) {
          var currTime = new Date().getTime();
          var timeToCall = Math.max(0, 16 - (currTime - lastTime));
          var id = window.setTimeout(function() { callback(currTime + timeToCall); }, timeToCall);
          lastTime = currTime + timeToCall;
          return id;
        };
      if (!window.cancelAnimationFrame)
        window.cancelAnimationFrame = function(id) { clearTimeout(id); };
    }());

    function Graph(o) {
      this.options = o;
      this.vertices = {};
      this.edges = {}; // form {vertexID:{edgeID:edge}}
    }

    /**
     * Adds an edge to the graph. If the vertices in this edge are not already
     * in the graph then they are added.
     */
    Graph.prototype.addEdge = function(e) {
      // if vertex1 and vertex2 don't exist in this.vertices, add them
      if (typeof(this.vertices[e.vertex1]) === 'undefined')
        this.vertices[e.vertex1] = new Vertex(e.vertex1);
      if (typeof(this.vertices[e.vertex2]) === 'undefined')
        this.vertices[e.vertex2] = new Vertex(e.vertex2);
      // add the edge
      if (typeof(this.edges[e.vertex1]) === 'undefined')
        this.edges[e.vertex1] = {};
      this.edges[e.vertex1][e.id] = e;
    }

    /**
     * Add a vertex to the graph. If a vertex with the same ID already exists
     * then the existing vertex's .data property is replaced with @param v.data
     */
    Graph.prototype.addVertex = function(v) {
      if (typeof(this.vertices[v.id]) === 'undefined')
        this.vertices[v.id] = v;
      else
        this.vertices[v.id].data = v.data;
    }

    function Vertex(id, data) {
      this.id = id;
      this.data = data ? data : {};
      // initialize to data.[x|y|z] or generate random number for each
      this.x = this.data.x ? this.data.x : -100 + Math.random() * 200;
      this.y = this.data.y ? this.data.y : -100 + Math.random() * 200;
      this.z = this.data.y ? this.data.y : -100 + Math.random() * 200;
      // set initial velocity to 0
      this.velocity = new Point(0, 0, 0);
      this.mass = this.data.mass ? this.data.mass : Math.random();
      this.force = new Point(0, 0, 0);
    }

    function Edge(vertex1ID, vertex2ID) {
      vertex1ID = vertex1ID ? vertex1ID : Math.random()
      vertex2ID = vertex2ID ? vertex2ID : Math.random()
      this.id = vertex1ID + "->" + vertex2ID;
      this.vertex1 = vertex1ID;
      this.vertex2 = vertex2ID;
    }

    function Point(x, y, z) {
      this.x = x;
      this.y = y;
      this.z = z;
    }

    Point.prototype.plus = function(p) {
      this.x += p.x
      this.y += p.y
      this.z += p.z
    }

    function ForceLayout(o) {
      this.repulsion = o.repulsion ? o.repulsion : 200;
      this.attraction = o.attraction ? o.attraction : 0.06;
      this.damping = o.damping ? o.damping : 0.9;
      this.graph = o.graph ? o.graph : new Graph();
      this.total_kinetic_energy = 0;
      this.animationID = -1;
    }

    ForceLayout.prototype.draw = function() {
      // vertex velocities initialized to (0,0,0) when a vertex is created
      // vertex positions initialized to random position when created
      cc = 0;
      do {
        this.total_kinetic_energy = 0;
        // for each vertex
        for (var i in this.graph.vertices) {
          var thisNode = this.graph.vertices[i];
          // running sum of total force on this particular node
          var netForce = new Point(0, 0, 0)
          // for each other node
          for (var j in this.graph.vertices) {
            if (thisNode != this.graph.vertices[j]) {
              // net-force := net-force + Coulomb_repulsion( this_node, other_node )
              netForce.plus(this.CoulombRepulsion(thisNode, this.graph.vertices[j]))
            }
          }
          // for each spring connected to this node:
          // pass the id of this node and the node it's connected to so
          // HookesAttraction can update the force on both vertices and return
          // that force to be added to the net force
          for (var k in this.graph.edges[thisNode.id]) {
            this.HookesAttraction(thisNode.id, this.graph.edges[thisNode.id][k].vertex2)
          }
          // without damping, it moves forever
          // this_node.velocity := (this_node.velocity + timestep * net-force) * damping
          thisNode.velocity.x = (thisNode.velocity.x + thisNode.force.x) * this.damping;
          thisNode.velocity.y = (thisNode.velocity.y + thisNode.force.y) * this.damping;
          thisNode.velocity.z = (thisNode.velocity.z + thisNode.force.z) * this.damping;
          // this_node.position := this_node.position + timestep * this_node.velocity
          thisNode.x = thisNode.velocity.x;
          thisNode.y = thisNode.velocity.y;
          thisNode.z = thisNode.velocity.z;
          // normalize x,y,z???
          // total_kinetic_energy := total_kinetic_energy + this_node.mass * (this_node.velocity)^2
          this.total_kinetic_energy += thisNode.mass *
            ((thisNode.velocity.x + thisNode.velocity.y + thisNode.velocity.z) *
             (thisNode.velocity.x + thisNode.velocity.y + thisNode.velocity.z))
        }
        cc += 1;
      } while (this.total_kinetic_energy > 0.5)
      console.log(cc, this.total_kinetic_energy, this.graph)
      this.cancelAnimation();
    }

    ForceLayout.prototype.HookesAttraction = function(v1ID, v2ID) {
      var a = this.graph.vertices[v1ID]
      var b = this.graph.vertices[v2ID]
      var force = new Point(this.attraction * (b.x - a.x),
                            this.attraction * (b.y - a.y),
                            this.attraction * (b.z - a.z))
      // Hooke's attraction
      a.force.x += force.x;
      a.force.y += force.y;
      a.force.z += force.z;
      b.force.x += this.attraction * (a.x - b.x);
      b.force.y += this.attraction * (a.y - b.y);
      b.force.z += this.attraction * (a.z - b.z);
      return force;
    }

    ForceLayout.prototype.CoulombRepulsion = function(vertex1, vertex2) {
      // http://en.wikipedia.org/wiki/Coulomb's_law
      // distance squared = ((x1-x2)*(x1-x2)) + ((y1-y2)*(y1-y2)) + ((z1-z2)*(z1-z2))
      var distanceSquared = (
        (vertex1.x - vertex2.x) * (vertex1.x - vertex2.x) +
        (vertex1.y - vertex2.y) * (vertex1.y - vertex2.y) +
        (vertex1.z - vertex2.z) * (vertex1.z - vertex2.z)
      );
      if (distanceSquared == 0) distanceSquared = 0.001;
      var coul = this.repulsion / distanceSquared;
      return new Point(coul * (vertex1.x - vertex2.x),
                       coul * (vertex1.y - vertex2.y),
                       coul * (vertex1.z - vertex2.z));
    }

    ForceLayout.prototype.animate = function() {
      if (this.animating)
        this.animationID = requestAnimationFrame(this.animate.bind(this));
      this.draw();
    }

    ForceLayout.prototype.cancelAnimation = function() {
      cancelAnimationFrame(this.animationID);
      this.animating = false;
    }

    ForceLayout.prototype.redraw = function() {
      this.animating = true;
      this.animate();
    }

    $(document).ready(function() {
      var g = new Graph();
      for (var i = 0; i <= 100; i++) {
        var v1 = new Vertex(Math.random(), {})
        var v2 = new Vertex(Math.random(), {})
        var e1 = new Edge(v1.id, v2.id);
        g.addEdge(e1);
      }
      console.log(g);
      var l = new ForceLayout({
        graph: g
      });
      l.redraw();
    });

    Read the article

  • Insert <div> outside every three <li>

    - by ignaty
    Hello. I have something like this:

    function cat_filter() {
      $.ajax({
        type: "POST",
        url: 'json/cat_filter.aspx',
        data: "catId=" + "&styleId=" + "&colourId=" + "&sizeId=" + "&minPrice=" + "&maxPrice=",
        dataType: "json",
        beforeSend: function () {
          // load loading cursor
        },
        success: function (data) {
          var CatItems = "";
          for (var x = 0; x < data.PRODUCTS.length; x++) {
            CatItems += '<li class="jcarousel-item jcarousel-item-horizontal jcarousel-item-' + [x] + ' jcarousel-item-' + [x] + '-horizontal jcarousel-item-placeholder jcarousel-item-placeholder-horizontal"><a class="large_image" href="#"><img src="' + data.PRODUCTS[x].product_img + '" alt="' + data.PRODUCTS[x].product_name + '"></a><h3 class="geo_17_darkbrown">' + data.PRODUCTS[x].product_name + '</h3>';
            if (data.PRODUCTS[x].product_onsale == 1) {
              CatItems += '<img alt="sale" src="assets/images/sale.gif" class="sale"><span class="geo_17_red_linethr">&pound;' + data.PRODUCTS[x].product_retailprice + '</span>&nbsp;&nbsp;<span class="price geo_17_darkbrown">&pound;' + data.PRODUCTS[x].product_webprice + '</span>';
            } else {
              CatItems += '<span class="price geo_17_darkbrown">&pound;' + data.PRODUCTS[x].product_webprice + '</span>';
            }
            if (data.PRODUCTS[x].product_COLOURS) {
              CatItems += '<span class="colour">';
              for (var y = 0; y < data.PRODUCTS[x].product_COLOURS.length; y++) {
                CatItems += '<span><a href="' + data.PRODUCTS[x].product_COLOURS[y].colours_large + '"><img src="' + data.PRODUCTS[x].product_COLOURS[y].colours_thumb + '" alt="' + data.PRODUCTS[x].product_COLOURS[y].colour_name + '" /></a></span>';
              }
              CatItems += '</span>';
            }
            CatItems += '</li>';
          }
          $('.carousel_00 ul').html(CatItems);
        },
        complete: function () {
          // remove loading cursor
        }
      });
    }

    This code generates this HTML:

    <div class="carousel_00">
      <ul>
        <li>
          <a href="#" class="large_image"><img src="assets/images/dress1.gif" alt="image"></a>
          <h3 class="geo_17_darkbrown">Rachel Dress</h3>
          <span class="price geo_17_darkbrown">&pound;89.99</span>
          <span class="colour">
            <span><a href="assets/images/big_image_1.gif"><img src="assets/images/black.gif" alt="black"></a></span>
            <span><img src="assets/images/brown.gif" alt="brown"></span>
            <span><img src="assets/images/purple.gif" alt="purple"></span>
          </span>
        </li>
        <li><a href="#"><img src="assets/images/dress2.gif" alt="image"></a> <h3 class="geo_17_darkbrown">Rachel Dress</h3> <span class="price geo_17_darkbrown">&pound;89.99</span> </li>
        <li><img class="sale" src="assets/images/sale.gif" alt="sale" /><a href="#"><img src="assets/images/dress3.gif" alt="image"></a> <h3 class="geo_17_darkbrown">Rachel Dress</h3> <span class="geo_17_red_linethr">&pound;99.99</span>&nbsp;&nbsp;<span class="price geo_17_darkbrown">&pound;89.99</span> </li>
        <li><a href="#"><img src="assets/images/dress1.gif" alt="image"></a> <h3 class="geo_17_darkbrown">Rachel Dress</h3> <span class="price geo_17_darkbrown">&pound;59.99</span> </li>
        <li><a href="#"><img src="assets/images/dress2.gif" alt="image"></a> <h3 class="geo_17_darkbrown">Rachel Dress</h3> <span class="price geo_17_darkbrown">&pound;89.99</span> </li>
        <li><a href="#"><img src="assets/images/dress3.gif" alt="image"></a> <h3 class="geo_17_darkbrown">Rachel Dress</h3> <span class="price geo_17_darkbrown">&pound;89.99</span> </li>
        <li><a href="#"><img src="assets/images/dress1.gif" alt="image"></a> <h3 class="geo_17_darkbrown">Rachel Dress</h3> <span class="price geo_17_darkbrown">&pound;89.99</span> </li>
        <li><a href="#"><img src="assets/images/dress2.gif" alt="image"></a> <h3 class="geo_17_darkbrown">Rachel Dress</h3> <span class="price geo_17_darkbrown">&pound;89.99</span> </li>
        <li><a href="#"><img src="assets/images/dress3.gif" alt="image"></a> <h3 class="geo_17_darkbrown">Rachel Dress</h3> <span class="price geo_17_darkbrown">&pound;89.99</span> </li>
      </ul>
    </div>

    What I need is for every three li's to be wrapped in a div. I know this is not semantic and not right, but this is only an example. (Basically, if I figure out how to do this, I will replace the li's with spans, and the div that I need around them with an li.) I will be very glad if someone can help me, because this code is already a bit much for me.

    Read the article

  • How do I call a bash script function using the exec function, passing parameters, in PHP?

    - by Stan
    I have created a bash script that installs Magento in a cPanel account, but I have a problem with the exec function:

    $function_path = Mage::getBaseDir()."/media/installer/function.sh";
    exec("$function_path $db_host $db_name $db_user $db_pass $url $ad_user $ad_pass $ad_email");

    This is the bash shell script, function.sh:

    #!/bin/bash
    magento_detail $dbhost $dbname $dbuser $dbpass $url $admin_username $admin_password $admin_email

    function magento_detail() {
      stty erase '^?'
      echo "To install Magento, you will need a blank database ready with a user assigned to it."
      echo -n "Do you have all of your database information"
      dbinfo = "y"
      echo $dbinfo
      if [ "$dbinfo" -eq 'y' ]
      then
        echo "Database Host (usually localhost) : $dbhost "
        echo "Database Name : $dbname "
        echo "Database User : $dbuser "
        echo "Database Password : $dbpass "
        echo "Store Url : $url "
        echo "Admin Username : $admin_username "
        echo "Admin Password : $admin_password "
        echo "Admin Email Address : $admin_email "
        echo -n "Include Sample Data? (y/n) "
        echo
        sample = "y"
        if [ "$sample" -eq "y" ]; then
          echo
          echo "Now installing Magento with sample data..."
          echo
          echo "Downloading packages..."
          echo
          wget http://www.magentocommerce.com/downloads/assets/1.5.1.0/magento-1.5.1.0.tar.gz
          wget http://www.magentocommerce.com/downloads/assets/1.2.0/magento-sample-data-1.2.0.tar.gz
          echo
          echo "Extracting data..."
          echo
          tar -zxvf magento-1.5.1.0.tar.gz
          tar -zxvf magento-sample-data-1.2.0.tar.gz
          echo
          echo "Moving files..."
          echo
          mv magento-sample-data-1.2.0/media/* magento/media/
          mv magento-sample-data-1.2.0/magento_sample_data_for_1.2.0.sql magento/data.sql
          mv magento/index.php magento/.htaccess ./$test1
          echo
          echo "Setting permissions..."
          echo
          chmod o+w var var/.htaccess app/etc
          chmod -R o+w media
          echo
          echo "Importing sample products..."
          echo
          mysql -h $dbhost -u $dbuser -p$dbpass $dbname < data.sql
          echo
          echo "Initializing PEAR registry..."
          echo
          chmod 550 mage
          ./mage mage-setup .
          echo
          echo "Downloading packages..."
          echo
          echo
          echo "Cleaning up files..."
          echo
          rm -rf downloader/pearlib/cache/* downloader/pearlib/download/*
          rm -rf magento/ magento-sample-data-1.2.0/
          rm -rf magento-1.5.1.0.tar.gz magento-sample-data-1.2.0.tar.gz data.sql
          rm -rf index.php.sample .htaccess.sample php.ini.sample LICENSE.txt STATUS.txt data.sql
          echo
          echo "Installing Magento..."
          echo
          php -f install.php --license_agreement_accepted "yes" --locale "en_US" --timezone "America/Los_Angeles" --default_currency "USD" --db_host "$dbhost" --db_name "$dbname" --db_user "$dbuser" --db_pass "$dbpass" --url "$url" --use_rewrites "yes" --use_secure "no" --secure_base_url "" --use_secure_admin "no" --admin_email "$admin_email" --admin_username "$admin_username" --admin_password "$admin_password"
          echo
          echo "Finished installing Magento"
          echo
          exit
        else
          echo "Now installing Magento without sample data..."
          echo
          echo "Downloading packages..."
          echo
          wget http://www.magentocommerce.com/downloads/assets/1.5.1.0/magento-1.5.1.0.tar.gz
          echo
          echo "Extracting data..."
          echo
          tar -zxvf magento-1.5.1.0.tar.gz
          echo
          echo "Moving files..."
          echo
          mv magento/* magento/.htaccess .
          echo
          echo "Setting permissions..."
          echo
          chmod o+w var var/.htaccess app/etc
          chmod -R o+w media
          echo
          echo "Initializing PEAR registry..."
          echo
          chmod 550 mage
          ./mage mage-setup .
          echo
          echo "Downloading packages..."
          echo
          echo
          echo "Cleaning up files..."
          echo
          rm -rf downloader/pearlib/cache/* downloader/pearlib/download/*
          rm -rf magento/ magento-1.5.1.0.tar.gz
          rm -rf index.php.sample .htaccess.sample php.ini.sample LICENSE.txt STATUS.txt
          echo
          echo "Installing Magento..."
          echo
          php -f install.php --license_agreement_accepted "yes" --locale "en_US" --timezone "America/Los_Angeles" --default_currency "USD" --db_host "$dbhost" --db_name "$dbname" --db_user "$dbuser" --db_pass "$dbpass" --url "$url" --use_rewrites "yes" --use_secure "no" --secure_base_url "" --use_secure_admin "no" --admin_email "$admin_email" --admin_username "$admin_username" --admin_password "$admin_password"
          echo
          echo "Finished installing Magento else part"
          exit
        fi
      else
        echo "Please setup a database first. Don't forget to assign a database user!"
        exit
      fi
    }

    When I run this exec command, it calls the bash script function magento_detail() with the arguments $db_host $db_name $db_user $db_pass $url $ad_user $ad_pass $ad_email, which I pass in the exec command. So is this the right way of calling a bash script function? It goes directly to the last step of the if condition and prints "Please setup a database first. Don't forget to assign a database user!" - it can't enter the if branch and goes directly to the else. Please help.

    Read the article

  • Abstracting the adding of click events to elements selected by class using jQuery

    - by baroquedub
    I'm slowly getting up to speed with jQuery and am starting to want to abstract my code. I'm running into problems trying to define click events at page load. In the code below, I'm trying to run through each div with the 'block' class and add events to some of its child elements by selecting them by class:

    <script language="javascript" type="text/javascript">
    $(document).ready(function () {
      $('HTML').addClass('JS'); // if JS enabled, hide answers
      $(".block").each(function() {
        problem = $(this).children('.problem');
        button = $(this).children('.showButton');
        problem.data('currentState', 'off');
        button.click(function() {
          if ((problem.data('currentState')) == 'off') {
            button.children('.btn').html('Hide');
            problem.data('currentState', 'on');
            problem.fadeIn('slow');
          } else if ((problem.data('currentState')) == 'on') {
            button.children('.btn').html('Solve');
            problem.data('currentState', 'off');
            problem.fadeOut('fast');
          }
          return false;
        });
      });
    });
    </script>

    <style media="all" type="text/css">
    .JS div.problem { display: none; }
    </style>

    <div class="block">
      <div class="showButton">
        <a href="#" title="Show solution" class="btn">Solve</a>
      </div>
      <div class="problem">
        <p>Answer 1</p>
      </div>
    </div>
    <div class="block">
      <div class="showButton">
        <a href="#" title="Show solution" class="btn">Solve</a>
      </div>
      <div class="problem">
        <p>Answer 2</p>
      </div>
    </div>

    Unfortunately, using this, only the last div's button actually works. The event is not 'localised' (if that's the right word for it?), i.e. the event is only applied to the last $(".block") in the each loop. So I have to laboriously add IDs for each element and define my click events one by one. Surely there's a better way! Can anyone tell me what I'm doing wrong? And how can I get rid of the need for those IDs? (I want this to work on dynamically generated pages where I might not know how many 'blocks' there are...)

    <script language="javascript" type="text/javascript">
    $(document).ready(function () {
      $('HTML').addClass('JS'); // if JS enabled, hide answers
      // Preferred version DOESN'T WORK
      // So have to add ids to each element and laboriously set up each one in turn...
      $('#problem1').data('currentState', 'off');
      $('#showButton1').click(function() {
        if (($('#problem1').data('currentState')) == 'off') {
          $('#showButton1 > a').html('Hide');
          $('#problem1').data('currentState', 'on');
          $('#problem1').fadeIn('slow');
        } else if (($('#problem1').data('currentState')) == 'on') {
          $('#showButton1 > a').html('Solve');
          $('#problem1').data('currentState', 'off');
          $('#problem1').fadeOut('fast');
        }
        return false;
      });
      $('#problem2').data('currentState', 'off');
      $('#showButton2').click(function() {
        if (($('#problem2').data('currentState')) == 'off') {
          $('#showButton2 > a').html('Hide');
          $('#problem2').data('currentState', 'on');
          $('#problem2').fadeIn('slow');
        } else if (($('#problem2').data('currentState')) == 'on') {
          $('#showButton2 > a').html('Solve');
          $('#problem2').data('currentState', 'off');
          $('#problem2').fadeOut('fast');
        }
        return false;
      });
    });
    </script>

    <style media="all" type="text/css">
    .JS div.problem { display: none; }
    </style>

    <div class="block">
      <div class="showButton" id="showButton1">
        <a href="#" title="Show solution" class="btn">Solve</a>
      </div>
      <div class="problem" id="problem1">
        <p>Answer 1</p>
      </div>
    </div>
    <div class="block">
      <div class="showButton" id="showButton2">
        <a href="#" title="Show solution" class="btn">Solve</a>
      </div>
      <div class="problem" id="problem2">
        <p>Answer 2</p>
      </div>
    </div>

    Read the article

  • Squid + Dans Guardian (simple configuration)

    - by The Digital Ninja
    I just built a new proxy server and compiled the latest versions of squid and dansguardian. We use basic authentication to select what users are allowed outside of our network. It seems squid is working just fine and accepts my username and password and lets me out. But if i connect to dans guardian, it prompts for username and password and then displays a message saying my username is not allowed to access the internet. Its pulling my username for the error message so i know it knows who i am. The part i get confused on is i thought that part was handled all by squid, and squid is working flawlessly. Can someone please double check my config files and tell me if i'm missing something or there is some new option i must set to get this to work. dansguardian.conf # Web Access Denied Reporting (does not affect logging) # # -1 = log, but do not block - Stealth mode # 0 = just say 'Access Denied' # 1 = report why but not what denied phrase # 2 = report fully # 3 = use HTML template file (accessdeniedaddress ignored) - recommended # reportinglevel = 3 # Language dir where languages are stored for internationalisation. # The HTML template within this dir is only used when reportinglevel # is set to 3. When used, DansGuardian will display the HTML file instead of # using the perl cgi script. This option is faster, cleaner # and easier to customise the access denied page. # The language file is used no matter what setting however. # languagedir = '/etc/dansguardian/languages' # language to use from languagedir. language = 'ukenglish' # Logging Settings # # 0 = none 1 = just denied 2 = all text based 3 = all requests loglevel = 3 # Log Exception Hits # Log if an exception (user, ip, URL, phrase) is matched and so # the page gets let through. Can be useful for diagnosing # why a site gets through the filter. on | off logexceptionhits = on # Log File Format # 1 = DansGuardian format 2 = CSV-style format # 3 = Squid Log File Format 4 = Tab delimited logfileformat = 1 # Log file location # # Defines the log directory and filename. #loglocation = '/var/log/dansguardian/access.log' # Network Settings # # the IP that DansGuardian listens on. If left blank DansGuardian will # listen on all IPs. That would include all NICs, loopback, modem, etc. # Normally you would have your firewall protecting this, but if you want # you can limit it to only 1 IP. Yes only one. filterip = # the port that DansGuardian listens to. filterport = 8080 # the ip of the proxy (default is the loopback - i.e. this server) proxyip = 127.0.0.1 # the port DansGuardian connects to proxy on proxyport = 3128 # accessdeniedaddress is the address of your web server to which the cgi # dansguardian reporting script was copied # Do NOT change from the default if you are not using the cgi. # accessdeniedaddress = 'http://YOURSERVER.YOURDOMAIN/cgi-bin/dansguardian.pl' # Non standard delimiter (only used with accessdeniedaddress) # Default is enabled but to go back to the original standard mode dissable it. nonstandarddelimiter = on # Banned image replacement # Images that are banned due to domain/url/etc reasons including those # in the adverts blacklists can be replaced by an image. This will, # for example, hide images from advert sites and remove broken image # icons from banned domains. # 0 = off # 1 = on (default) usecustombannedimage = 1 custombannedimagefile = '/etc/dansguardian/transparent1x1.gif' # Filter groups options # filtergroups sets the number of filter groups. 
A filter group is a set of content # filtering options you can apply to a group of users. The value must be 1 or more. # DansGuardian will automatically look for dansguardianfN.conf where N is the filter # group. To assign users to groups use the filtergroupslist option. All users default # to filter group 1. You must have some sort of authentication to be able to map users # to a group. The more filter groups the more copies of the lists will be in RAM so # use as few as possible. filtergroups = 1 filtergroupslist = '/etc/dansguardian/filtergroupslist' # Authentication files location bannediplist = '/etc/dansguardian/bannediplist' exceptioniplist = '/etc/dansguardian/exceptioniplist' banneduserlist = '/etc/dansguardian/banneduserlist' exceptionuserlist = '/etc/dansguardian/exceptionuserlist' # Show weighted phrases found # If enabled then the phrases found that made up the total which excedes # the naughtyness limit will be logged and, if the reporting level is # high enough, reported. on | off showweightedfound = on # Weighted phrase mode # There are 3 possible modes of operation: # 0 = off = do not use the weighted phrase feature. # 1 = on, normal = normal weighted phrase operation. # 2 = on, singular = each weighted phrase found only counts once on a page. # weightedphrasemode = 2 # Positive result caching for text URLs # Caches good pages so they don't need to be scanned again # 0 = off (recommended for ISPs with users with disimilar browsing) # 1000 = recommended for most users # 5000 = suggested max upper limit urlcachenumber = # # Age before they are stale and should be ignored in seconds # 0 = never # 900 = recommended = 15 mins urlcacheage = # Smart and Raw phrase content filtering options # Smart is where the multiple spaces and HTML are removed before phrase filtering # Raw is where the raw HTML including meta tags are phrase filtered # CPU usage can be effectively halved by using setting 0 or 1 # 0 = raw only # 1 = smart only # 2 = both (default) phrasefiltermode = 2 # Lower casing options # When a document is scanned the uppercase letters are converted to lower case # in order to compare them with the phrases. However this can break Big5 and # other 16-bit texts. If needed preserve the case. As of version 2.7.0 accented # characters are supported. # 0 = force lower case (default) # 1 = do not change case preservecase = 0 # Hex decoding options # When a document is scanned it can optionally convert %XX to chars. # If you find documents are getting past the phrase filtering due to encoding # then enable. However this can break Big5 and other 16-bit texts. # 0 = disabled (default) # 1 = enabled hexdecodecontent = 0 # Force Quick Search rather than DFA search algorithm # The current DFA implementation is not totally 16-bit character compatible # but is used by default as it handles large phrase lists much faster. # If you wish to use a large number of 16-bit character phrases then # enable this option. # 0 = off (default) # 1 = on (Big5 compatible) forcequicksearch = 0 # Reverse lookups for banned site and URLs. # If set to on, DansGuardian will look up the forward DNS for an IP URL # address and search for both in the banned site and URL lists. This would # prevent a user from simply entering the IP for a banned address. # It will reduce searching speed somewhat so unless you have a local caching # DNS server, leave it off and use the Blanket IP Block option in the # bannedsitelist file instead. reverseaddresslookups = off # Reverse lookups for banned and exception IP lists. 
# If set to on, DansGuardian will look up the forward DNS for the IP
# of the connecting computer. This means you can put in hostnames in
# the exceptioniplist and bannediplist.
# It will reduce searching speed somewhat so unless you have a local DNS server,
# leave it off.
reverseclientiplookups = off

# Build bannedsitelist and bannedurllist cache files.
# This will compare the date stamp of the list file with the date stamp of
# the cache file and will recreate as needed.
# If a bsl or bul .processed file exists, then that will be used instead.
# It will increase process start speed by 300%. On slow computers this will
# be significant. Fast computers do not need this option. on | off
createlistcachefiles = on

# POST protection (web upload and forms)
# does not block forms without any file upload, i.e. this is just for
# blocking or limiting uploads
# measured in kibibytes after MIME encoding and header bumph
# use 0 for a complete block
# use higher (e.g. 512 = 512Kbytes) for limiting
# use -1 for no blocking
#maxuploadsize = 512
#maxuploadsize = 0
maxuploadsize = -1

# Max content filter page size
# Sometimes web servers label binary files as text which can be very
# large which causes a huge drain on memory and cpu resources.
# To counter this, you can limit the size of the document to be
# filtered and get it to just pass it straight through.
# This setting also applies to content regular expression modification.
# The size is in Kibibytes - eg 2048 = 2Mb
# use 0 for no limit
maxcontentfiltersize =

# Username identification methods (used in logging)
# You can have as many methods as you want and not just one. The first one
# will be used then if no username is found, the next will be used.
# * proxyauth is for when basic proxy authentication is used (no good for
#   transparent proxying).
# * ntlm is for when the proxy supports the MS NTLM authentication
#   protocol. (Only works with IE5.5 sp1 and later). **NOT IMPLEMENTED**
# * ident is for when the others don't work. It will contact the computer
#   that the connection came from and try to connect to an identd server
#   and query it for the user owner of the connection.
usernameidmethodproxyauth = on
usernameidmethodntlm = off # **NOT IMPLEMENTED**
usernameidmethodident = off

# Preemptive banning - this means that if you have proxy auth enabled and a user accesses
# a site banned by URL for example they will be denied straight away without a request
# for their user and pass. This has the effect of requiring the user to visit a clean
# site first before it knows who they are and thus maybe an admin user.
# This is how DansGuardian has always worked but in some situations it is less than
# ideal. So you can optionally disable it. Default is on.
# As a side effect disabling this makes AD image replacement work better as the mime
# type is known.
preemptivebanning = on

# Misc settings
# if on it adds an X-Forwarded-For: <clientip> to the HTTP request
# header. This may help solve some problem sites that need to know the
# source ip. on | off
forwardedfor = on

# if on it uses the X-Forwarded-For: <clientip> to determine the client
# IP. This is for when you have squid between the clients and DansGuardian.
# Warning - headers are easily spoofed. on | off
usexforwardedfor = off

# if on it logs some debug info regarding fork()ing and accept()ing which
# can usually be ignored. These are logged by syslog. It is safe to leave
# it on or off.
logconnectionhandlingerrors = on

# Fork pool options
# sets the maximum number of processes to spawn to handle the incoming
# connections. Max value usually 250 depending on OS.
# On large sites you might want to try 180.
maxchildren = 180

# sets the minimum number of processes to spawn to handle the incoming connections.
# On large sites you might want to try 32.
minchildren = 32

# sets the minimum number of processes to be kept ready to handle connections.
# On large sites you might want to try 8.
minsparechildren = 8

# sets the minimum number of processes to spawn when it runs out.
# On large sites you might want to try 10.
preforkchildren = 10

# sets the maximum number of processes to have doing nothing.
# When this many are spare it will cull some of them.
# On large sites you might want to try 64.
maxsparechildren = 64

# sets the maximum age of a child process before it croaks.
# This is the number of connections they handle before exiting.
# On large sites you might want to try 10000.
maxagechildren = 5000

# Process options
# (Change these only if you really know what you are doing).
# These options allow you to run multiple instances of DansGuardian on a single machine.
# Remember to edit the log file path above also if that is your intention.

# IPC filename
#
# Defines IPC server directory and filename used to communicate with the log process.
ipcfilename = '/tmp/.dguardianipc'

# URL list IPC filename
#
# Defines URL list IPC server directory and filename used to communicate with the URL
# cache process.
urlipcfilename = '/tmp/.dguardianurlipc'

# PID filename
#
# Defines process id directory and filename.
#pidfilename = '/var/run/dansguardian.pid'

# Disable daemoning
# If enabled the process will not fork into the background.
# It is not usually advantageous to do this.
# on|off ( defaults to off )
nodaemon = off

# Disable logging process
# on|off ( defaults to off )
nologger = off

# Daemon runas user and group
# This is the user that DansGuardian runs as. Normally the user/group nobody.
# Uncomment to use. Defaults to the user set at compile time.
#daemonuser = 'nobody'
#daemongroup = 'nobody'

# Soft restart
# When on this disables the forced killing off all processes in the process group.
# This is not to be confused with the -g run time option - they are not related.
# on|off ( defaults to off )
softrestart = off

maxcontentramcachescansize = 2000
maxcontentfilecachescansize = 20000
downloadmanager = '/etc/dansguardian/downloadmanagers/default.conf'
authplugin = '/etc/dansguardian/authplugins/proxy-basic.conf'

Squid.conf

http_port 3128
hierarchy_stoplist cgi-bin ?
acl QUERY urlpath_regex cgi-bin \?
cache deny QUERY
acl apache rep_header Server ^Apache
#broken_vary_encoding allow apache
access_log /squid/var/logs/access.log squid
hosts_file /etc/hosts
auth_param basic program /squid/libexec/ncsa_auth /squid/etc/userbasic.auth
auth_param basic children 5
auth_param basic realm proxy
auth_param basic credentialsttl 2 hours
auth_param basic casesensitive off
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern . 0 20% 4320

acl NoAuthNec src <HIDDEN FOR SECURITY>
acl BrkRm src <HIDDEN FOR SECURITY>
acl Dials src <HIDDEN FOR SECURITY>
acl Comps src <HIDDEN FOR SECURITY>
acl whsws dstdom_regex -i .opensuse.org .novell.com .suse.com mirror.mcs.an1.gov mirrors.kernerl.org www.suse.de suse.mirrors.tds.net mirrros.usc.edu ftp.ale.org suse.cs.utah.edu mirrors.usc.edu mirror.usc.an1.gov linux.nssl.noaa.gov noaa.gov .kernel.org ftp.ale.org ftp.gwdg.de .medibuntu.org mirrors.xmission.com .canonical.com .ubuntu.
acl opensites dstdom_regex -i .mbsbooks.com .bowker.com .usps.com .usps.gov .ups.com .fedex.com go.microsoft.com .microsoft.com .apple.com toolbar.msn.com .contacts.msn.com update.services.openoffice.org fms2.pointroll.speedera.net services.wmdrm.windowsmedia.com windowsupdate.com .adobe.com .symantec.com .vitalbook.com vxn1.datawire.net vxn.datawire.net download.lavasoft.de .download.lavasoft.com .lavasoft.com updates.ls-servers.com .canadapost. .myyellow.com minirick symantecliveupdate.com wm.overdrive.com www.overdrive.com productactivation.one.microsoft.com www.update.microsoft.com testdrive.whoson.com www.columbia.k12.mo.us banners.wunderground.com .kofax.com .gotomeeting.com tools.google.com .dl.google.com .cache.googlevideo.com .gpdl.google.com .clients.google.com cache.pack.google.com kh.google.com maps.google.com auth.keyhole.com .contacts.msn.com .hrblock.com .taxcut.com .merchantadvantage.com .jtv.com .malwarebytes.org www.google-analytics.com dcs.support.xerox.com .dhl.com .webtrendslive.com javadl-esd.sun.com javadl-alt.sun.com .excelsior.edu .dhlglobalmail.com .nessus.org .foxitsoftware.com foxit.vo.llnwd.net installshield.com .mindjet.com .mediascouter.com media.us.elsevierhealth.com .xplana.com .govtrack.us sa.tulsacc.edu .omniture.com fpdownload.macromedia.com webservices.amazon.com
acl password proxy_auth REQUIRED
acl all src all
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl to_localhost dst 127.0.0.0/8
acl SSL_ports port 443 563 631 2001 2005 8731 9001 9080 10000
acl Safe_ports port 80          # http
acl Safe_ports port 21          # ftp
acl Safe_ports port 443 563     # https, snews
acl Safe_ports port 70          # gopher
acl Safe_ports port 210         # wais
acl Safe_ports port 1936-65535  # unregistered ports
acl Safe_ports port 280         # http-mgmt
acl Safe_ports port 488         # gss-http
acl Safe_ports port 10000
acl Safe_ports port 631
acl Safe_ports port 901         # SWAT
acl purge method PURGE
acl CONNECT method CONNECT
acl UTubeUsers proxy_auth "/squid/etc/utubeusers.list"
acl RestrictUTube dstdom_regex -i youtube.com
acl RestrictFacebook dstdom_regex -i facebook.com
acl FacebookUsers proxy_auth "/squid/etc/facebookusers.list"
acl BuemerKEC src 10.10.128.0/24
acl MBSsortnet src 10.10.128.0/26
acl MSNExplorer browser -i MSN
acl Printers src <HIDDEN FOR SECURITY>
acl SpecialFolks src <HIDDEN FOR SECURITY>

# streaming download
acl fails rep_mime_type ^.*mms.*
acl fails rep_mime_type ^.*ms-hdr.*
acl fails rep_mime_type ^.*x-fcs.*
acl fails rep_mime_type ^.*x-ms-asf.*
acl fails2 urlpath_regex dvrplayer mediastream mms://
acl fails2 urlpath_regex \.asf$ \.afx$ \.flv$ \.swf$
acl deny_rep_mime_flashvideo rep_mime_type -i video/flv
acl deny_rep_mime_shockwave rep_mime_type -i ^application/x-shockwave-flash$
acl x-type req_mime_type -i ^application/octet-stream$
acl x-type req_mime_type -i application/octet-stream
acl x-type req_mime_type -i ^application/x-mplayer2$
acl x-type req_mime_type -i application/x-mplayer2
acl x-type req_mime_type -i ^application/x-oleobject$
acl x-type req_mime_type -i application/x-oleobject
acl x-type req_mime_type -i application/x-pncmd
acl x-type req_mime_type -i ^video/x-ms-asf$
acl x-type2 rep_mime_type -i ^application/octet-stream$
acl x-type2 rep_mime_type -i application/octet-stream
acl x-type2 rep_mime_type -i ^application/x-mplayer2$
acl x-type2 rep_mime_type -i application/x-mplayer2
acl x-type2 rep_mime_type -i ^application/x-oleobject$
acl x-type2 rep_mime_type -i application/x-oleobject
acl x-type2 rep_mime_type -i application/x-pncmd
acl x-type2 rep_mime_type -i ^video/x-ms-asf$
acl RestrictHulu dstdom_regex -i hulu.com
acl broken dstdomain cms.montgomerycollege.edu events.columbiamochamber.com members.columbiamochamber.com public.genexusserver.com
acl RestrictVimeo dstdom_regex -i vimeo.com
acl http_port port 80

#http_reply_access deny deny_rep_mime_flashvideo
#http_reply_access deny deny_rep_mime_shockwave
# streaming files
#http_access deny fails
#http_reply_access deny fails
#http_access deny fails2
#http_reply_access deny fails2
#http_access deny x-type
#http_reply_access deny x-type
#http_access deny x-type2
#http_reply_access deny x-type2

follow_x_forwarded_for allow localhost
acl_uses_indirect_client on
log_uses_indirect_client on

http_access allow manager localhost
http_access deny manager
http_access allow purge localhost
http_access deny purge
http_access allow SpecialFolks
http_access deny CONNECT !SSL_ports
http_access allow whsws
http_access allow opensites
http_access deny BuemerKEC !MBSsortnet
http_access deny BrkRm RestrictUTube RestrictFacebook RestrictVimeo
http_access allow RestrictUTube UTubeUsers
http_access deny RestrictUTube
http_access allow RestrictFacebook FacebookUsers
http_access deny RestrictFacebook
http_access deny RestrictHulu
http_access allow NoAuthNec
http_access allow BrkRm
http_access allow FacebookUsers RestrictVimeo
http_access deny RestrictVimeo
http_access allow Comps
http_access allow Dials
http_access allow Printers
http_access allow password
http_access deny !Safe_ports
http_access deny SSL_ports !CONNECT
http_access allow http_port
http_access deny all
http_reply_access allow all
icp_access allow all

access_log /squid/var/logs/access.log squid
visible_hostname proxy.site.com
forwarded_for off
coredump_dir /squid/cache/
#header_access Accept-Encoding deny broken
#acl snmppublic snmp_community mysecretcommunity
#snmp_port 3401
#snmp_access allow snmppublic all
cache_mem 3 GB
#acl snmppublic snmp_community mbssquid
#snmp_port 3401
#snmp_access allow snmppublic all
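
    For what it's worth: when Squid accepts the credentials but DansGuardian still denies the user by name, the denial usually comes from DansGuardian's own per-user lists rather than from authentication itself. With the proxy-basic auth plugin enabled, DansGuardian applies banneduserlist, exceptionuserlist, and filtergroupslist after Squid has authenticated, so an entry (or a stray wildcard) in /etc/dansguardian/banneduserlist would produce exactly this message. One quick way to compare the two paths is to request the same page through both ports. Below is a minimal Python 3 sketch of that check; the proxy hostname and credentials are placeholders, not values from the configs above:

import urllib.error
import urllib.request

# Placeholders -- substitute the real proxy host and a known-good account.
PROXY_HOST = "proxy.site.com"
USER, PASSWORD = "someuser", "somepass"

def fetch_via(port):
    """Fetch a test page through the proxy on the given port using basic auth."""
    proxy_url = "http://%s:%s@%s:%d" % (USER, PASSWORD, PROXY_HOST, port)
    opener = urllib.request.build_opener(
        urllib.request.ProxyHandler({"http": proxy_url}))
    try:
        with opener.open("http://example.com/", timeout=10) as resp:
            return resp.getcode()
    except urllib.error.HTTPError as err:
        return err.code  # e.g. 407 = proxy auth failed

# 3128 talks to Squid directly; 8080 goes through DansGuardian first.
for port in (3128, 8080):
    print(port, fetch_via(port))

    If the request succeeds on 3128 but comes back with the denied page on 8080 for the same account, the user lists above are the place to look.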


  • GAE formpreview

    - by Niklas R
    I'm trying to enable form preview with Google App Engine. I'm getting the following error message, so I suspect I've made a mistake somewhere:

    ...
    handler = handler_class()
TypeError: __call__() takes at least 2 arguments (1 given)

Can you tell what's wrong with my attempt? Here is some of the code.

from django.contrib.formtools.preview import FormPreview

class AFormPreview(FormPreview):
    def done(self, request, cleaned_data):
        # Do something with the cleaned_data, then redirect
        # to a "success" page.
        self.response.out.write('Done!')

class AForm(djangoforms.ModelForm):
    text = forms.CharField(widget=forms.Textarea(attrs={'rows': '11', 'cols': '70', 'class': 'foo'}),
                           label=_("content").capitalize())

    def clean(self):
        cleaned_data = self.clean_data
        name = cleaned_data.get("name")
        if not name:
            raise forms.ValidationError("No name.")
        # Always return the full collection of cleaned data.
        return cleaned_data

    class Meta:
        model = A
        fields = ['category', 'currency', 'price', 'title', 'phonenumber',
                  'postaladress', 'name', 'text', 'email']  # change the order

...
('/aformpreview/([^/]*)', AFormPreview(AForm)),

UPDATE: Here's a complete app where the preview is not working. Any ideas are most welcome:

import cgi

from google.appengine.api import users
from google.appengine.ext import db
from google.appengine.ext import webapp
from google.appengine.ext.webapp import template
from google.appengine.ext.webapp.util import run_wsgi_app
from google.appengine.ext.db import djangoforms

class Item(db.Model):
    name = db.StringProperty()
    quantity = db.IntegerProperty(default=1)
    target_price = db.FloatProperty()
    priority = db.StringProperty(default='Medium',
                                 choices=['High', 'Medium', 'Low'])
    entry_time = db.DateTimeProperty(auto_now_add=True)
    added_by = db.UserProperty()

class ItemForm(djangoforms.ModelForm):
    class Meta:
        model = Item
        exclude = ['added_by']

from django.contrib.formtools.preview import FormPreview

class ItemFormPreview(FormPreview):
    def done(self, request, cleaned_data):
        # Do something with the cleaned_data, then redirect
        # to a "success" page.
        return HttpResponseRedirect('/')

class MainPage(webapp.RequestHandler):
    def get(self):
        self.response.out.write('<html><body>'
                                '<form method="POST" action="/">'
                                '<table>')
        # This generates our shopping list form and writes it in the response
        self.response.out.write(ItemForm())
        self.response.out.write('</table>'
                                '<input type="submit">'
                                '</form></body></html>')

    def post(self):
        data = ItemForm(data=self.request.POST)
        if data.is_valid():
            # Save the data, and redirect to the view page
            entity = data.save(commit=False)
            entity.added_by = users.get_current_user()
            entity.put()
            self.redirect('/items.html')
        else:
            # Reprint the form
            self.response.out.write('<html><body>'
                                    '<form method="POST" action="/">'
                                    '<table>')
            self.response.out.write(data)
            self.response.out.write('</table>'
                                    '<input type="submit">'
                                    '</form></body></html>')

class ItemPage(webapp.RequestHandler):
    def get(self):
        query = db.GqlQuery("SELECT * FROM Item ORDER BY name")
        for item in query:
            self.response.out.write('<a href="/edit?id=%d">Edit</a> - '
                                    % item.key().id())
            self.response.out.write("%s - Need to buy %d, cost $%0.2f each<br>"
                                    % (item.name, item.quantity, item.target_price))

class EditPage(webapp.RequestHandler):
    def get(self):
        id = int(self.request.get('id'))
        item = Item.get(db.Key.from_path('Item', id))
        self.response.out.write('<html><body>'
                                '<form method="POST" action="/edit">'
                                '<table>')
        self.response.out.write(ItemForm(instance=item))
        self.response.out.write('</table>'
                                '<input type="hidden" name="_id" value="%s">'
                                '<input type="submit">'
                                '</form></body></html>' % id)

    def post(self):
        id = int(self.request.get('_id'))
        item = Item.get(db.Key.from_path('Item', id))
        data = ItemForm(data=self.request.POST, instance=item)
        if data.is_valid():
            # Save the data, and redirect to the view page
            entity = data.save(commit=False)
            entity.added_by = users.get_current_user()
            entity.put()
            self.redirect('/items.html')
        else:
            # Reprint the form
            self.response.out.write('<html><body>'
                                    '<form method="POST" action="/edit">'
                                    '<table>')
            self.response.out.write(data)
            self.response.out.write('</table>'
                                    '<input type="hidden" name="_id" value="%s">'
                                    '<input type="submit">'
                                    '</form></body></html>' % id)

def main():
    application = webapp.WSGIApplication(
        [('/', MainPage),
         ('/edit', EditPage),
         ('/items.html', ItemPage),
         ('/itemformpreview', ItemFormPreview(ItemForm)),
        ],
        debug=True)
    run_wsgi_app(application)
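
    In case it helps: webapp.WSGIApplication expects handler classes and instantiates each one with no arguments, so passing the instance ItemFormPreview(ItemForm) means webapp ends up invoking Django's FormPreview.__call__(self, request, ...) without a request argument, which is exactly the TypeError shown. Django's FormPreview is also built around Django's own request/response cycle, so it does not drop straight into webapp. One workaround is to skip FormPreview and implement the preview step in a plain webapp.RequestHandler. The sketch below is one possible shape, not a drop-in fix; ItemPreviewPage and the hidden 'stage' field are invented names, and it reuses ItemForm and the imports from the app above:

class ItemPreviewPage(webapp.RequestHandler):
    """Two-step flow: validate, show a read-only preview, save on confirm."""
    def post(self):
        data = ItemForm(data=self.request.POST)
        if not data.is_valid():
            # Invalid input: reprint the form with error messages.
            self.response.out.write('<form method="POST" action="/itemformpreview">'
                                    '<table>%s</table>'
                                    '<input type="submit"></form>' % data)
            return
        if self.request.get('stage') == 'confirm':
            # Second pass: the user confirmed the preview, so save.
            entity = data.save(commit=False)
            entity.added_by = users.get_current_user()
            entity.put()
            self.redirect('/items.html')
            return
        # First pass: echo the cleaned values and carry the raw POST
        # data along in hidden fields for the confirmation submit.
        self.response.out.write('<h3>Please confirm:</h3><ul>')
        for name, value in data.clean_data.items():
            self.response.out.write('<li>%s: %s</li>'
                                    % (cgi.escape(str(name)), cgi.escape(str(value))))
        self.response.out.write('</ul><form method="POST" action="/itemformpreview">')
        for name in self.request.POST:
            self.response.out.write('<input type="hidden" name="%s" value="%s">'
                                    % (cgi.escape(name, True),
                                       cgi.escape(self.request.get(name), True)))
        self.response.out.write('<input type="hidden" name="stage" value="confirm">'
                                '<input type="submit" value="Confirm"></form>')

    With that in place the route becomes ('/itemformpreview', ItemPreviewPage), which webapp can instantiate the way it expects.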

