Search Results

Search found 2676 results on 108 pages for 'spam blocking'.

Page 36/108 | < Previous Page | 32 33 34 35 36 37 38 39 40 41 42 43  | Next Page >

  • Qt Socket blocking functions required to run in QThread where created. Any way past this?

    - by Alexander Kondratskiy
    The title is very cryptic, so here goes! I am writing a client that behaves in a very synchronous manner. Due to the design of the protocol and the server, everything has to happen sequentially (send request, wait for reply, service reply, etc.), so I am using blocking sockets. Here is where Qt comes in: in my application I have a GUI thread, a command-processing thread, and a scripting-engine thread. I create the QTcpSocket in the command-processing thread, as part of my Client class. The Client class has various methods that boil down to writing to the socket, reading back a specific number of bytes, and returning a result. The problem comes when I try to call Client methods directly from the scripting-engine thread. The Qt sockets randomly time out, and when using a debug build of Qt, I get these warnings:

        QSocketNotifier: socket notifiers cannot be enabled from another thread
        QSocketNotifier: socket notifiers cannot be disabled from another thread

    Anytime I call these methods from the command-processing thread (where Client was created), I do not get these problems. To phrase the situation simply: calling blocking functions of QAbstractSocket, like waitForReadyRead(), from a thread other than the one where the socket was created (dynamically allocated) causes random behaviour and debug asserts/warnings. Has anyone else experienced this? Ways around it? Thanks in advance.
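
    A common workaround, sketched below in PyQt5 with illustrative names (Client and _do_request are assumptions; the same queued-connection pattern applies in C++ Qt via signals/slots or QMetaObject::invokeMethod): route every socket operation through a queued signal, so the blocking calls always execute in the thread that owns the QTcpSocket. Note that the owning thread must be running an event loop for queued slots to fire.

        # Sketch only: queued cross-thread dispatch in PyQt5 (names are illustrative).
        from PyQt5.QtCore import QObject, pyqtSignal, pyqtSlot
        from PyQt5.QtNetwork import QTcpSocket

        class Client(QObject):
            _do_request = pyqtSignal(bytes)  # may be emitted from any thread

            def __init__(self, parent=None):
                super().__init__(parent)
                self.sock = QTcpSocket(self)            # owned by this thread
                self._do_request.connect(self._handle)  # auto => queued across threads

            @pyqtSlot(bytes)
            def _handle(self, payload):
                # Runs in the socket's own thread, so blocking waits are legal here.
                self.sock.write(payload)
                self.sock.waitForBytesWritten(3000)
                self.sock.waitForReadyRead(3000)

            def request(self, payload):
                self._do_request.emit(payload)  # safe from the scripting thread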

    Read the article

  • How to interrupt a thread performing a blocking socket connect?

    - by Jason R
    I have some code that spawns a pthread that attempts to maintain a socket connection to a remote host. If the connection is ever lost, it attempts to reconnect using a blocking connect() call on its socket. Since the code runs in a separate thread, I don't really care about the fact that it uses the synchronous socket API. That is, until it comes time for my application to exit. I would like to perform some semblance of an orderly shutdown, so I use thread synchronization primitives to wake up the thread and signal for it to exit, then perform a pthread_join() on the thread to wait for it to complete. This works great, unless the thread is in the middle of a connect() call when I command the shutdown. In that case, I have to wait for the connect to time out, which could be a long time. This makes the application appear to take a long time to shut down. What I would like to do is to interrupt the call to connect() in some way. After the call returns, the thread will notice my exit signal and shut down cleanly. Since connect() is a system call, I thought that I might be able to intentionally interrupt it using a signal (thus making the call return EINTR), but I'm not sure if this is a robust method in a POSIX threads environment. Does anyone have any recommendations on how to do this, either using signals or via some other method? As a note, the connect() call is down in some library code that I cannot modify, so changing to a non-blocking socket is not an option.
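
    An aside, sketched in Python rather than the asker's POSIX C, and using a different technique than the signal/EINTR idea: bound the blocking connect with a timeout so the worker re-checks its exit flag periodically. Python's socket.setdefaulttimeout() even reaches sockets created inside library code; host and port below are placeholders.

        # Sketch: bound connect() with a timeout so the worker can notice a stop flag.
        import socket
        import threading

        socket.setdefaulttimeout(2.0)  # applies to sockets created in library code too

        stop = threading.Event()

        def maintain_connection(host, port):
            while not stop.is_set():
                try:
                    conn = socket.create_connection((host, port))
                except OSError:
                    continue  # timed out or refused; loop re-checks the stop flag
                try:
                    pass  # ... service the connection ...
                finally:
                    conn.close()

        # shutdown: stop.set(); the thread joins within ~2s instead of a full TCP timeout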

    Read the article

  • JAXB adding namespace to parent but not to the child elements contained

    - by Nishant
    I put together an XSD and used JAXB to generate classes out of it. Here are my XSDs.

    myDoc.xsd:

        <?xml version="1.0" encoding="UTF-8"?>
        <xs:schema xmlns="http://www.mydoc.org" targetNamespace="http://www.mydoc.org"
                   xmlns:xs="http://www.w3.org/2001/XMLSchema"
                   xmlns:mtp="http://www.mytypes.com" elementFormDefault="qualified">
          <xs:import namespace="http://www.mytypes.com" schemaLocation="mytypes.xsd"/>
          <xs:element name="myDoc">
            <xs:complexType>
              <xs:sequence>
                <xs:element name="crap" type="xs:string"/>
                <xs:element ref="mtp:foo"/>
                <xs:element ref="mtp:bar"/>
              </xs:sequence>
            </xs:complexType>
          </xs:element>
        </xs:schema>

    mytypes.xsd:

        <?xml version="1.0" encoding="UTF-8"?>
        <xs:schema targetNamespace="http://www.mytypes.com" xmlns="http://www.mytypes.com"
                   xmlns:tns="http://www.mytypes.com" xmlns:xs="http://www.w3.org/2001/XMLSchema"
                   attributeFormDefault="qualified" elementFormDefault="qualified">
          <xs:element name="foo" type="tns:Foo"/>
          <xs:element name="bar" type="tns:Bar"/>
          <xs:element name="spam" type="tns:Spam"/>
          <xs:simpleType name="Foo">
            <xs:restriction base="xs:string"/>
          </xs:simpleType>
          <xs:complexType name="Bar">
            <xs:sequence>
              <xs:element ref="spam"/>
            </xs:sequence>
          </xs:complexType>
          <xs:simpleType name="Spam">
            <xs:restriction base="xs:string"/>
          </xs:simpleType>
        </xs:schema>

    The document marshalled is:

        <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
        <myDoc xmlns:ns2="http://www.mytypes.com">
          <crap>real crap</crap>
          <ns2:foo>bleh</ns2:foo>
          <ns2:bar>
            <spam>blah</spam>
          </ns2:bar>
        </myDoc>

    Note that the <spam> element uses the default namespace. I would like it to use the ns2 namespace. The schema (mytypes.xsd) expresses the fact that <spam> is contained within <bar>, which in the XML instance is bound to the ns2 namespace. I've broken my head over this for over a week, and I would like the ns2 prefix to appear on <spam>. What should I do? Required:

        <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
        <myDoc xmlns:ns2="http://www.mytypes.com">
          <crap>real crap</crap>
          <ns2:foo>bleh</ns2:foo>
          <ns2:bar>
            <ns2:spam>blah</ns2:spam><!--NS NS NS-->
          </ns2:bar>
        </myDoc>

    Read the article

  • How can I execute a non-blocking System.Beep()?

    - by Siracuse
    In C# I can call Console.Beep(). However, if you specify a duration of, say, 1000 (one second), it will not execute the next line of code until that second passes. Is there any way to execute Console.Beep() in a non-blocking fashion, so that it keeps beeping while the code below it continues to execute?
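
    The general fix, sketched here in Python for illustration (winsound is Windows-only, mirroring Console.Beep; in C# the analogue would be to run the beep on its own thread): hand the blocking call to a worker thread so the caller returns immediately.

        # Sketch: fire-and-forget beep on a daemon thread (Windows-only winsound).
        import threading
        import winsound

        def beep_async(freq=800, duration_ms=1000):
            threading.Thread(target=winsound.Beep,
                             args=(freq, duration_ms),
                             daemon=True).start()

        beep_async()
        print("this line runs while the beep is still sounding")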

    Read the article

  • Sending test emails in development without spam or rejection issues.

    - by Micah Burnett
    I run my development environment in a VM and need to test the delivery and appearance of emails from my applications. The problem is that when my SMTP server starts delivering a lot of mail to my corporate email account, the server is soon rejected as a source of spam. Of course, the major Internet email providers will also never accept email from such a server. I've tried delivering to a specified pickup directory and opening the messages in Outlook Express, but then images always display as broken.
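
    One likely reason for the broken images, with a hedged sketch of a fix in Python (filenames and paths are placeholders): if the HTML references remote images, a message opened from a pickup directory has no reason to fetch them. Embedding the images with a Content-ID makes the .eml self-contained.

        # Sketch: embed an image inline via Content-ID so pickup-directory
        # messages render without network access ("logo.png" is a placeholder).
        from email.message import EmailMessage

        msg = EmailMessage()
        msg["Subject"] = "layout test"
        msg["From"] = "dev@localhost"
        msg["To"] = "me@localhost"
        msg.set_content("plain-text fallback")
        msg.add_alternative('<p>Hello</p><img src="cid:logo">', subtype="html")
        with open("logo.png", "rb") as img:
            msg.get_payload()[1].add_related(img.read(), "image", "png",
                                             cid="<logo>")
        with open("pickup/test.eml", "wb") as out:
            out.write(bytes(msg))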

    Read the article

  • Captcha in my Joomla site is not blocking spam robots

    - by jax
    In my Joomla install I have removed the email registration and instead added a Captcha field to the PHP code using the recaptcha.net method. For some reason I am still getting what I think are spam users (robots), but I don't know how they would get around the Captcha field. Anything I should check?
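
    The usual culprit, with a hedged sketch in Python of the server-side half (the endpoint shown is the current reCAPTCHA verify API; the 2010-era URL differed): a captcha only stops bots if the form handler rejects submissions whose response token fails verification, since bots can POST directly to the registration URL and skip the widget entirely.

        # Sketch: server-side verification of a reCAPTCHA response token.
        import json
        import urllib.parse
        import urllib.request

        def captcha_ok(secret, token, remote_ip):
            data = urllib.parse.urlencode({
                "secret": secret,
                "response": token,
                "remoteip": remote_ip,
            }).encode()
            url = "https://www.google.com/recaptcha/api/siteverify"
            with urllib.request.urlopen(url, data) as resp:
                return json.load(resp).get("success", False)

        # registration handler: if not captcha_ok(...): reject the request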

    Read the article

  • ActionScript/Flex: Augment MouseEvents with extra information

    - by David Wolever
    I've got a business class, Spam, and the corresponding view class, SpamView. How can I augment MouseEvents coming out of SpamView so that they contain a reference to the instance of Spam which the SpamView is displaying? Here's how I'd like to use it:

        class ViewContainer {
            ...
            for each (spam in spams) {
                addChild(new SpamView(spam));
            ...
            function handleMouseMove(event:MouseEvent) {
                if (event is SpamViewMouseEvent)
                    trace("The mouse is being moved over spam:", spam)
            }
        }

    Thanks! Things I've considered which don't work:

        - Adding event listeners to each SpamView: the bookkeeping (making sure that they are added/removed properly) is a pain.
        - Using event.target: the event's target may be a child of the SpamView (which isn't very useful).
        - Listening for a MouseEvent, creating a new SpamViewMouseEvent, copying all the fields over, then dispatching that: copying all the fields manually is also a pain.

    Read the article

  • Right design to validate attributes of a class instance

    - by systempuntoout
    Given a simple Python class like this:

        class Spam(object):
            def __init__(self, description, value):
                self.description = description
                self.value = value

    which is the correct approach to check these constraints?

        - "description cannot be empty"
        - "value must be greater than zero"

    Should I:

        1. validate data before creating the spam object?
        2. check data in the __init__ method?
        3. create an is_valid method on the Spam class and call it with spam.is_valid()?
        4. create an is_valid static method on the Spam class and call it with Spam.is_valid(description, value)?
        5. check data in setters?
        6. ...

    Could you recommend a well-designed / Pythonic / not verbose (on a class with many attributes) / elegant approach?
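
    For reference, a minimal sketch of option 5 using properties, which covers both construction and later reassignment (raising ValueError is an assumption, not part of the question):

        # Sketch: validate in property setters; __init__ assignments go
        # through the same setters, so both paths are covered.
        class Spam(object):
            def __init__(self, description, value):
                self.description = description
                self.value = value

            @property
            def description(self):
                return self._description

            @description.setter
            def description(self, d):
                if not d:
                    raise ValueError("description cannot be empty")
                self._description = d

            @property
            def value(self):
                return self._value

            @value.setter
            def value(self, v):
                if v <= 0:
                    raise ValueError("value must be greater than zero")
                self._value = v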

    Read the article

  • Python - How can I make this code asynchronous?

    - by dave
    Here's some code that illustrates my problem:

        def blocking1():
            while True:
                yield 'first blocking function example'

        def blocking2():
            while True:
                yield 'second blocking function example'

        for i in blocking1():
            print 'this will be shown'

        for i in blocking2():
            print 'this will not be shown'

    I have two functions which contain while True loops. These will yield data which I will then log somewhere (most likely, to an sqlite database). I've been playing around with threading and have gotten it working. However, I don't really like it... What I would like to do is make my blocking functions asynchronous. Something like:

        def blocking1(callback):
            while True:
                callback('first blocking function example')

        def blocking2(callback):
            while True:
                callback('second blocking function example')

        def log(data):
            print data

        blocking1(log)
        blocking2(log)

    How can I achieve this in Python? I've seen that the standard library comes with asyncore, and the big name in this game is Twisted, but both of these seem to be geared toward socket I/O. How can I async my non-socket-related, blocking functions?
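
    Since the functions in the first listing are already generators, one minimal direction (a sketch, not the asyncore/Twisted answer) is to interleave them cooperatively with a round-robin loop, no threads required:

        # Sketch: round-robin over the generators; each next() resumes one
        # function just long enough to yield its next item.
        import itertools

        def log(data):
            print(data)          # one argument, so this works in Python 2 and 3

        def schedule(*generators):
            for gen in itertools.cycle(generators):
                log(next(gen))   # never exhausts: both loops are 'while True'

        schedule(blocking1(), blocking2())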

    Read the article

  • Ajax request returns 500 Internal Server Error in production

    - by joe
    Everything works fine locally, but when I try the same code in production I get a 500 (Internal Server Error).

    entries controller:

        def set_spam
          @entry = Entry.find(params[:entry_id])
          @entry.spam = params[:what] == "spam" ? true : false
          @entry.save
          respond_to do |format|
            format.js
          end
        end

    application.js:

        $(".entry-actions .spams img").click(function () {
            $.post("/set-spam", {
                entry_id: $(this).attr("entry_id"),
                what: $(this).attr("class")
            });
            return false;
        });

    view:

        <div class="spams">
          <img title="spam" class="spam" src="/images/pixel.gif" entry_id="<%= entry.id %>" />
        </div>

    route:

        post "/set-spam" => "entries#set_spam"

    Read the article

  • How to properly configure personal domain to send emails and pass spam filters? Is email forwarding enough?

    - by ChocoDeveloper
    I'm using my own domain from Namecheap, and another company for the mail hosting for my personal email. I configured my domain to forward *@mydomain.com to the account I was given in the mail hosting company. I can send and receive emails, but I'm wondering if the emails I send are being flagged as spam sometimes. I remember when I used my own mail server years ago, there were mechanisms for my domain to say "this mail server is allowed to send emails as [email protected]", like adding a TXT record or something. So the questions are: Is email forwarding enough? Will mail servers understand that the mail server is allowed to send emails on my behalf? Is there a testing mail server where I can send an email and be told whether it thinks it's spam?
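
    The mechanism half-remembered here is SPF: a TXT record on the domain listing which servers may send mail for it (DKIM and a matching reverse-DNS entry also matter for spam scoring). A small sketch, assuming the third-party dnspython package (2.x API), to inspect what a domain currently publishes:

        # Sketch: print the SPF policy a domain publishes ("mydomain.com" is
        # the placeholder domain from the question).
        import dns.resolver  # pip install dnspython

        for rdata in dns.resolver.resolve("mydomain.com", "TXT"):
            txt = b"".join(rdata.strings).decode()
            if txt.startswith("v=spf1"):
                print(txt)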

    Read the article

  • SpamAssassin setup: how to make sure X-Spam-Status is always written

    - by DoviG
    Hi, I just found out that SpamAssassin skips checking email bigger than 250KB by default. Due to a coding bug, I check for the X-Spam-Status header in incoming emails and did not take into account the fact that it might be missing. I know that I can increase the size limit in configuration, but that may cause a load issue on my server. Since I do not want to redeploy my application at this time, I was wondering if there is a way to make sure this header exists automatically in every email, either through SpamAssassin configuration or through Postfix or something else. Thanks, Dov.
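
    For reference, the application-side fix is tiny if a patch ever becomes possible; a sketch (the fallback value is an assumption) that treats a missing header as "not scanned":

        # Sketch: never assume X-Spam-Status is present -- large messages
        # skipped by SpamAssassin won't carry it.
        import email

        def spam_status(raw_bytes):
            msg = email.message_from_bytes(raw_bytes)
            return msg.get("X-Spam-Status", "No, score=0.0 (not scanned)")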

    Read the article

  • Will news ticker using overflow:hidden cause Google to see site as spam?

    - by molipix
    In the hope of tempting Googlebot with fresh content, I've implemented a homepage news ticker which displays the 20 most recent headlines on our site. The implementation I have chosen is a <ul>, with each headline being a <li>. Initially all the <li> elements have no style, but JavaScript kicks in on page load and gives all but one of them a style="display:none" attribute. JavaScript then displays each of the other 19 headlines in a loop. So far so good. However, in order to prevent a visually unpleasant page load where the 20 items display and then immediately collapse, I am using overflow:hidden on the <ul> element. Anyone got a view on what Googlebot is likely to make of this? Does the fact that I'm using overflow:hidden make the content look like spam?

    Read the article

  • Is there any reason for a blocking call to winsock's send() function on Vista to return immediately?

    - by ivymike
    Hi all, is there any reason for a blocking call to winsock's send() function on Vista to return immediately? It works with the expected delay on XP and below. I'm wondering if this has anything to do with the auto-tuning feature of Vista. Code:

        char *pBuffer;   // pointer to data
        int bytes;       // total size
        int i = 0, j = 0;

        while (i < bytes)
        {
            j = send(m_sock, pBuffer + i, bytes - i, 0);
            if (j == SOCKET_ERROR)   // check added: a failed send() returns -1,
                break;               // which would otherwise corrupt the offset
            i += j;
        }

    Thanks, Pavan

    Read the article

  • Why is this mail going straight to SPAM box?

    - by ththat
    I am using the following script to send mail:

        <?php
        extract($_POST);
        $subject = "Feedback from ".$name." (".$email.", Ph: ".$phone.")";
        // note: no additional headers are passed, so the message goes out
        // without a From: header set by the script
        $mail = @mail($send, $subject, $content);
        if ($mail) {
            echo "Your feedback has been sent";
        } else {
            echo "We are sorry for the inconvenience, but we could not send your feedback now.";
        }
        ?>

    But the mail always ends up in the spam folder. Why?

    Read the article

  • Troubleshooting Application Timeouts in SQL Server

    - by Tara Kizer
    I recently received the following email from a blog reader:

        "We are having an OLTP database instance, using SQL Server 2005, with little to moderate traffic (10-20 requests/min). There are also bulk imports that occur at regular intervals in this DB, and the import duration ranges between 10 secs and 1 min, depending on the data size. Intermittently (2-3 times in a week), we face an issue where queries get timed out (default of 30 secs set in the application). On analyzing, we found two stored procedures, having queries with multiple table joins inside them, taking a long time (5-10 mins) to execute, when ideally the execution duration ranges between 5-10 secs. The execution plan showed a Clustered Index Scan happening instead of a Clustered Index Seek. All required indexes are present and index fragmentation is minimal, as we rebuild indexes regularly along with updating statistics. With no other alternate options occurring to us, we restarted SQL Server, and thereafter the performance was back on track. But sometimes it was still giving timeout errors for some hits, and so we also restarted IIS, and that stopped the problem as of now."

    Rather than respond directly to the blog reader, I thought it would be more interesting to share my thoughts on this issue in a blog. There are a few things that I can think of that could cause abnormal timeouts:

        - Blocking
        - Bad plan in cache
        - Outdated statistics
        - Hardware bottleneck

    To determine if blocking is the issue, we can easily run sp_who/sp_who2 or a query directly on sysprocesses (select * from master..sysprocesses where blocked <> 0). If blocking is present and consistent, then you'll need to determine whether or not to kill the parent blocking process. Killing a process will cause the transaction to roll back, so you need to proceed with caution. Killing the parent blocking process is only a temporary solution, so you'll need to do more thorough analysis to figure out why the blocking was present. You should look into missing indexes and perhaps consider changing the database's isolation level to READ_COMMITTED_SNAPSHOT.

    The blog reader mentions that the execution plan shows a clustered index scan when a clustered index seek is normal for the stored procedure. A clustered index scan might have been chosen either because that is what is in cache already or because of out-of-date statistics. The blog reader mentions that bulk imports occur at regular intervals, so outdated statistics is definitely something that could cause this issue. The blog reader may need to update statistics after imports are done if the imports are changing a lot of data (greater than 10%). If the statistics are good, then the query optimizer might have chosen to scan rather than seek in a previous execution because the scan was determined to be less costly due to the value of an input parameter. If this parameter value is rare, then its execution plan in cache is what we call a bad plan. You want the best plan in cache for the most frequent parameter values. If a bad plan is a recurring problem on your system, then you should consider rewriting the stored procedure. You might want to break up the code into multiple stored procedures so that each can have a different execution plan in cache.

    To remove a bad plan from cache, you can recompile the stored procedure. An alternative method is to run DBCC FREEPROCCACHE, which drops the procedure cache. It is better to recompile stored procedures rather than dropping the procedure cache, as dropping the procedure cache affects all plans in cache rather than just the ones that were bad, so there will be a temporary performance penalty until the plans are loaded into cache again.

    To determine if there is a hardware bottleneck, such as slow I/O or high CPU utilization, you will need to run Performance Monitor on the database server. Hopefully you already have a baseline of the server, so you know what is normal and what is not. Be on the lookout for I/O requests taking longer than 12 milliseconds and CPU utilization over 90%. The servers that I support are typically under 30% CPU utilization, but your baseline could be higher and still be within a normal range.

    If restarting the SQL Server service fixes the problem, then the problem was most likely due to blocking or a bad plan in the procedure cache. Rather than restarting the SQL Server service, which causes downtime, the blog reader should instead analyze the above-mentioned things. Proceed with caution when restarting the SQL Server service, as all transactions that have not completed will be rolled back at startup. This crash recovery process could take longer than normal if there was a long-running transaction running when the service was stopped. Until the crash recovery process is completed on the database, it is unavailable to your applications.

    If restarting IIS fixes the problem, then the problem might not have been inside SQL Server. Prior to taking this step, you should do analysis of the above-mentioned things.

    If you can think of other reasons why the blog reader is facing this issue a few times a week, I'd love to hear your thoughts via a blog comment.

    Read the article

  • The SSIS tuning tip that everyone misses

    - by Rob Farley
    I know that everyone misses this, because I'm yet to find someone who doesn't have a bit of an epiphany when I describe it.

    When tuning Data Flows in SQL Server Integration Services, people see the Data Flow as moving from the Source to the Destination, passing through a number of transformations. What people don't consider is the Source getting the data out of a database. Remember, the source of data for your Data Flow is not your Source Component. It's wherever the data is, within your database, probably on a disk somewhere. You need to tune your query to optimise it for SSIS, and this is what most people fail to do.

    I'm not suggesting that people don't tune their queries – there's plenty of information out there about making sure that your queries run as fast as possible. But for SSIS, it's not about how fast your query runs. Let me say that again, but in bolder text: The speed of an SSIS Source is not about how fast your query runs.

    If your query is used in a Source component for SSIS, the thing that matters is how fast it starts returning data. In particular, those first 10,000 rows to populate that first buffer, ready to pass down the rest of the transformations on its way to the Destination.

    Let's look at a very simple query as an example, using the AdventureWorks database. We're picking the distinct Weight values out of the Product table, and it does this by scanning the table and doing a Sort. It's a Distinct Sort, which means that the duplicates are discarded. It'll be no surprise to see that the data produced is sorted. Obvious, I know, but I'm making a comparison to what I'll do later.

    Before I explain the problem here, let me jump back into the SSIS world... If you've investigated how to tune an SSIS flow, then you'll know that some SSIS Data Flow Transformations are known to be Blocking, some are Partially Blocking, and some are simply Row transformations. Take the SSIS Sort transformation, for example. I'm using a larger data set for this, because my small list of Weights won't demonstrate it well enough. Seven buffers of data came out of the source, but none of them could be pushed past the Sort operator, just in case the last buffer contained the data that would be sorted into the first buffer. This is a blocking operation.

    Back in the land of T-SQL, we consider our Distinct Sort operator. It's also blocking. It won't let data through until it's seen all of it. If you weren't okay with blocking operations in SSIS, why would you be happy with them in an execution plan? The source of your data is not your OLE DB Source. Remember this. The source of your data is the NCIX/CIX/Heap from which it's being pulled.

    Picture it like this... the data flows from the Clustered Index, through the Distinct Sort operator, into the SELECT operator, where a series of SSIS buffers are populated, flowing (as they get full) down through the SSIS transformations. Alright, I know that I'm taking some liberties here, because the two queries aren't the same, but consider the visual. The data is flowing from your disk and through your execution plan before it reaches SSIS, so you could easily find that a blocking operation in your plan is just as painful as a blocking operation in your SSIS Data Flow.

    Luckily, T-SQL gives us a brilliant query hint to help avoid this:

        OPTION (FAST 10000)

    This hint means that it will choose a query plan which is optimised for the first 10,000 rows – the default SSIS buffer size. And the effect can be quite significant.

    First let's consider a simple example, then we'll look at a larger one. Consider our Weights. We don't have 10,000, so I'm going to use OPTION (FAST 1) instead. You'll notice that the query is more expensive, using a Flow Distinct operator instead of the Distinct Sort. This operator is consuming 84% of the query, instead of the 59% we saw from the Distinct Sort. But the first row can be returned quicker – a Flow Distinct operator is non-blocking.

    The data here isn't sorted, of course. It's in the same order that it came out of the index, just with duplicates removed. As soon as a Flow Distinct sees a value that it hasn't come across before, it pushes it out to the operator on its left. It still has to maintain the list of what it's seen so far, but by handling it one row at a time, it can push rows through quicker. Overall, it's a lot more work than the Distinct Sort, but if the priority is the first few rows, then perhaps that's exactly what we want.

    The Query Optimizer seems to do this by optimising the query as if there were only one row coming through. This 1-row estimation is caused by the Query Optimizer imagining the SELECT operation saying "Give me one row" first, and this message being passed all the way along. The request might not make it all the way back to the source, but in my simple example, it does. I hope this simple example has helped you understand the significance of the blocking operator.

    Now I'm going to show you an example on a much larger data set. This data was fetching about 780,000 rows, and these are the Estimated Plans. The data needed to be sorted, to support further SSIS operations that needed that. First, without the hint... and now with OPTION (FAST 10000): a very different plan, I'm sure you'll agree. In case you're curious, those arrows in the top one are 780,000 rows in size. In the second, they're estimated to be 10,000, although the Actual figures end up being 780,000.

    The top one definitely runs faster. It finished several times faster than the second one. With the amount of data being considered, these numbers were in minutes. Look at the second one – it's doing Nested Loops across 780,000 rows! That's not generally recommended at all. That's "go and make yourself a coffee" time. In this case, it was about six or seven minutes. The faster one finished in about a minute.

    But in SSIS-land, things are different. The particular data flow that was consuming this data was significant. It was being pumped into a Script Component to process each row based on previous rows, creating about a dozen different flows. The data flow would take roughly ten minutes to run – ten minutes from when the data first appeared.

    The query that completes faster – chosen by the Query Optimizer with no hints, based on accurate statistics (rather than pretending the numbers are smaller) – would take a minute to start getting the data into SSIS, at which point the ten-minute flow would start, taking eleven minutes to complete. The query that took longer – chosen by the Query Optimizer pretending it only wanted the first 10,000 rows – would take only ten seconds to fill the first buffer. Despite the fact that it might have taken the database another six or seven minutes to get the data out, SSIS didn't care. Every time it wanted the next buffer of data, it was already available, and the whole process finished in about ten minutes and ten seconds.

    When debugging SSIS, you run the package and sit there waiting to see the Debug information start appearing. You look for the numbers on the data flow, watching operators go Yellow and Green. Without the hint, I'd sit there for a minute. With the hint, just ten seconds. You can imagine which one I preferred. By adding this hint, it felt like a magic wand had been waved across the query, to make it run several times faster. It wasn't the case at all – but it felt like it to SSIS.

    Read the article

  • Scan-to-email messages downloaded by the POP connector on SBS 2008 are delivered to the badmail folder

    - by Jon
    I have a Kyocera MFD. I have it set to scan to email using the ISP's SMTP server. The messages are delivered and received by the hosted email server. The POP connector on the SBS server downloads the messages but delivers them to the badmail folder. I have given the message a subject and a body. Spam score from the message header:

        MIME-Version: 1.0
        X-Mailer: NetWorkScanner Mail System Version 1.1
        Content-Type: multipart/mixed;
          boundary="------------V2VkLCAxMCBNYXIgMjAxMCAxMjoxOToxNSArMDAwMA=="
        X-Spam-Status: No, score=-2.6
        X-Spam-Score: -25
        X-Spam-Bar: --
        X-Spam-Flag: NO

        --------------V2VkLCAxMCBNYXIgMjAxMCAxMjoxOToxNSArMDAwMA==
        Content-Type: text/plain; charset="us-ascii"
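
    The spam score above is negative, so the filter is likely not the issue; badmail delivery on SBS is typically about missing or unresolvable recipient headers. A quick way to check a captured message, sketched in Python (the file path is a placeholder):

        # Sketch: dump the headers the POP connector cares about from a .eml file.
        import email

        with open("badmail/sample.eml", "rb") as f:
            msg = email.message_from_binary_file(f)
        for name in ("From", "To", "Date", "Message-ID", "Subject"):
            print(name, ":", msg.get(name))  # None here is a likely culprit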

    Read the article

  • Discussion - Allowing / blocking user access to pages (Client Side Only!) - Javascript / Jquery

    - by Ozaki
    TLDR: using plain HTML/JavaScript (client side only) I want to prevent viewing of certain pages. The user will have to type a username and password, and depending on that they get access to different pages.

        - Answers can NOT include server side whatsoever.
        - It does not matter if they can break it easily. There is no sensitive information, etc. Also, the target audience will not have access to the internet OR probably know what a cookie is...
        - At some point the user will have to type a username/password. (I can define the cookie here.)

    Currently I thought of using cookies, setting a cookie for each page to say "true"/"false", but that would get messy with so many cookies. Or setting an array within a cookie for each page? I have a div "#Content" which, as it looks, encompasses all of my content on the page, so blocking out content will be as simple as replacing it with "sorry, you don't have access", etc. For example:

        // assumes the jquery.cookie plugin; the options object is the third argument
        $.cookie("Access", "page1, page2, page3", { expires: 1 });

    I am looking for any way to do this; it does not have to be with cookies. Would be nice to get a discussion of the different ways this can be done. So the question is: what do YOU think would be a good way to go about doing this with client-side validation?

    Read the article
