RootsWeb.com Mailing Lists
    1. Re: [FreeHelp] inability to submit sitemap due to robots.txt fileproblem
    2. Pat Asher
    3. At 10:39 AM 11/25/2012, chris dale wrote:
> How do I get rid of this so called robots.txt file? (if I have one)
> to correct the problem and what will be the repercussions to my website?

Jill,

The robots.txt file that Googlebot is looking for is located in the root of the server. For example, on the Freepages server it is http://freepages.rootsweb.ancestry.com/robots.txt

On the www server it is blocked, i.e. http://www.rootsweb.ancestry.com/robots.txt If this is a recent change, that may be what is causing the error message for you.

Regardless, you, as an account holder, DO NOT have access to the server root. You can not change, add or delete the robots.txt file, and placing one in your account will have no effect.

I don't use the Google search engine, so I don't know what workarounds others might be using.

Pat Asher

    11/25/2012 07:45:59
    1. Re: [FreeHelp] inability to submit sitemap due to robots.txt fileproblem
    2. chris dale
    3. Pat, Regarding the following info you relayed:
> On the www server, it is blocked, i.e. http://www.rootsweb.ancestry.com/robots.txt If this is a recent change, that may be what is causing the error message for you. Regardless, you, as an account holder, DO NOT have access to the server root. You can not change, add or delete the robots.txt file, and placing one in your account will have no effect.

Ok, so if I can't affect the robots.txt file because it is controlled by ancestry.com, then what can I do as a website account holder (if anything) to be able to submit my site maps to Google again? I have been doing it all along over the past several years, up until mid November when this happened.

Does this mean that Google will no longer index any more of my pages? Has this happened to anyone else trying to resubmit a site map on the rootsweb websites in November?

Thanks again! Most sincerely, Jill

    11/25/2012 05:11:57
    1. Re: [FreeHelp] inability to submit sitemap due to robots.txt fileproblem
    2. chris dale
    3. Thanks Jill for checking out the code on my website. I deleted that inappropriate header tag and eliminated that meta name google site verification code (I have no idea where that bit came from, actually - I didn't deliberately put it in).

Regarding copying that robots.txt file to the computer and deleting it from the server - I'm not sure how to do that one. I had gotten the impression that the robots.txt file is on the ancestry.com server and that I couldn't change it if I wanted to - unless I misunderstood. Could this robots.txt issue that is not allowing me to send my site map be an ancestry issue and not my website??? I am just baffled, because I don't think I did anything recently to bring this on. I haven't touched my index.html page or the template in a year or two.

I read with interest that link about the "Instance Begin Editable Name" that is occurring on my site, as well as on many others using templates in Dreamweaver CS4. I haven't touched that template for several years, but it looks like the title tag is appearing in the right place to me on all the pages created by the template.

Here is what is on my template page:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<!-- TemplateBeginEditable name="doctitle" -->
<title>mayo template</title>
<!-- TemplateEndEditable -->
<link href="../countymayo.css" rel="stylesheet" type="text/css" />
<!-- TemplateBeginEditable name="head" -->
<meta name="robots" content="index, follow">
<meta name="description" content="Attymass Townlands, County Mayo Ireland.">
<!-- TemplateEndEditable -->

Here is what is on my index.html page:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml"><!-- InstanceBegin template="/Templates/mayo.dwt" codeOutsideHTMLIsLocked="false" -->
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
<!-- InstanceBeginEditable name="doctitle" -->
<title>County Mayo Beginnings Homepage</title>
<!-- InstanceEndEditable -->
<link href="countymayo.css" rel="stylesheet" type="text/css" />
<!-- InstanceBeginEditable name="head" -->
<meta name="robots" content="index, follow">
<meta name="description" content="County Mayo, County Roscommon Ireland Genealogical Research. Included are maps, geographical divisions, deciphering

Thanks for all your help. I certainly appreciate it. I just don't get how to change the robots.txt issue to get things reset.

Jill

    11/25/2012 05:04:46
    1. Re: [FreeHelp] inability to submit sitemap due to robots.txt fileproblem
    2. chris dale
    3. I just went to Webmaster Tools - Health, clicked Blocked URLs, clicked the "Test robots.txt" tab. The contents listed below already appeared in the box:

# Domain:[rootsweb.ancestry.com]
#
# This file should reside in the root directory ancestry.XX/robots.txt
#
# Tells Scanning Robots Where They Are And Are Not Welcome
# User-agent: can also specify by name; "*" is for all bots
# Disallow: disallow if directive matches first part of requested path
#
User-Agent: *
Disallow: /flytrap/flybait

Under "URLs - Specify the URLs and user-agents to test against" it listed my site. I clicked Test and it said:

Test results
Url: http://www.rootsweb.ancestry.com/~irlmayo2/
Googlebot: Allowed - Detected as a directory; specific files may have different restrictions
Googlebot-Mobile: Allowed - Detected as a directory; specific files may have different restrictions

I didn't copy and paste anything that wasn't already in that box. I have no idea what or where this robots.txt file is they mention ("copy content of robots.txt file to first box"). Why does the test show Allowed and yet I can't submit a sitemap and the Googlebots are being blocked? I am beyond confused. Jill

    11/25/2012 01:05:23
    1. [FreeHelp] inability to submit sitemap due to robots.txt file problem
    2. chris dale
    3. I just went to Webmaster Tools - Health, clicked Blocked URLs, clicked the "Test robots.txt" tab. The contents listed below already appeared in the box:

# Domain:[rootsweb.ancestry.com]
#
# This file should reside in the root directory ancestry.XX/robots.txt
#
# Tells Scanning Robots Where They Are And Are Not Welcome
# User-agent: can also specify by name; "*" is for all bots
# Disallow: disallow if directive matches first part of requested path
#
User-Agent: *
Disallow: /flytrap/flybait

Under "URLs - Specify the URLs and user-agents to test against" it listed my site. I clicked Test and it said:

Test results
Url: http://www.rootsweb.ancestry.com/~irlmayo2/
Googlebot: Allowed - Detected as a directory; specific files may have different restrictions
Googlebot-Mobile: Allowed - Detected as a directory; specific files may have different restrictions

I didn't copy and paste anything that wasn't already in that box. I have no idea what or where this robots.txt file is they mention ("copy content of robots.txt file to first box"). Why does the test show Allowed and yet I can't submit a sitemap and the Googlebots are being blocked? I am beyond confused. Jill
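[Editor's sketch] The "Allowed" result above is consistent with the quoted rules: the only Disallow line blocks paths beginning with /flytrap/flybait, so a ~irlmayo2 URL passes. A minimal illustration using Python's standard urllib.robotparser, feeding it the directive lines quoted in the message (the URLs are the ones discussed in this thread):

```python
# Parse the two active directives from the robots.txt quoted above and
# check the same URLs the Webmaster Tools tester was given.
from urllib import robotparser

ROBOTS_TXT = """\
User-Agent: *
Disallow: /flytrap/flybait
"""

rp = robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# Only paths under /flytrap/flybait are blocked; everything else is allowed.
print(rp.can_fetch("*", "http://www.rootsweb.ancestry.com/~irlmayo2/"))       # True
print(rp.can_fetch("*", "http://www.rootsweb.ancestry.com/flytrap/flybait"))  # False
```

So the rules themselves permit crawling the site; the sitemap failure discussed in this thread is a separate problem (the server failing to serve robots.txt at all), which no account-level change can fix.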

    11/25/2012 12:59:33
    1. Re: [FreeHelp] inability to submit sitemap due to robots.txt fileproblem
    2. chris dale
    3. Jill (Muir), I am pretty weak when it comes to putting the code into my website. I think I explained that wrong. Someone helped me put <meta name="robots" content="index, follow" /> into my code about a year or two ago when I was having trouble getting my pages indexed, and everything has gone along just fine until suddenly I am getting this problem with this robots.txt file that Googlebot was unable to download. I don't understand what I could have done that changed things.

Is this the line in my code that is the problem? (I thought it was just to help get my site indexed.) Or is the robots.txt file something else? To my knowledge I haven't changed anything in my code to do with robots. How do I get rid of this so-called robots.txt file (if I have one) to correct the problem, and what will be the repercussions to my website? Sorry to be so lame on understanding this, but it is really over my head.

I checked out that link you gave me to robotstxt.org and that has really helpful info, but my site has been like this for over a year, getting indexed on a regular basis with no problems, and I don't think I changed anything at all... which makes no sense to me. The info on that link makes sense; I am just not sure how to apply it to my problem (if that makes any sense). Jill

    11/25/2012 12:39:21
    1. [FreeHelp] inability to submit sitemap due to robots.txt file problem
    2. chris dale
    3. Hello again. I tried to resubmit my website site map today (I had 1303 pages indexed before this) and it was rejected. The error message read: "Couldn't access my site map - network unreachable, robots.txt unreachable. Found a robots.txt file at the root of your site but were unable to download it. Please ensure that it is accessible or remove it completely."

This robots.txt issue is driving me insane. I have quite a few pictures/graphic images on my website, so I have this in my template code (should it only be on the pages with pictures or graphic images?). It has been like that for over a year on all of my pages, but I don't think I have changed or done anything different since Googlebot started having trouble accessing my site Nov 14 because of this robots.txt problem.

<meta name="robots" content="index, follow" />
<meta name="Description" content="County Mayo, Ireland Major Market Towns Map." />
<meta name="Keywords" content="Mayo, Ireland, Market, Towns, Map" />

I want to be able to continue to resubmit my site maps and have them indexed. Do I need to remove the <meta name="robots" content="index, follow" /> line to clear up this problem, or is there something else you would suggest that I do? I really haven't had time to do much work on my website since mid Nov when this all started, so I have no idea where the problem stems from. Any suggestions would be appreciated. Jill

    11/24/2012 11:21:17
    1. Re: [FreeHelp] Google unable to access the robots text file
    2. Barry Carlson
    3. On Wed, 21 Nov 2012 05:40:18 -0800 (PST), chris dale wrote:
> Thanks for the great suggestion on putting the absent page back on with a redirect! I worked through the broken links noted on the Xenu Link Sleuth (a great program by the way) - thanks! and believe I have corrected all of the broken links but I am still getting the unreachable robots.txt message on my webmaster tools. Apparently the problem began on November 14 and 15. I am trying to figure out how I can access the robots.txt in logs for that day to try to analyze and correct the problem. Any suggestions. Again thanks for everyones time!
> :)Jill

Jill,

The robots.txt file you are referring to is located in the root directory at:-

http://www.rootsweb.ancestry.com

... and is configured by Ancestry staff. Your website is effectively a sub-domain, i.e.

http://www.rootsweb.ancestry.com/~irlmayo2/

... and its root directory is /~irlmayo2, but the only robots.txt file that will be honoured by a robot is the one you do not have access to. So in short, your Webmaster Tools is reporting on something you have no control over.

    11/22/2012 12:46:40
    1. Re: [FreeHelp] Google unable to access the robots text file
    2. chris dale
    3. Thanks Judy, Barry and Ron for all the great information and suggestions.  I haven't had a problem submitting my site maps and getting the pages indexed and haven't added many pages in the past few weeks. I think I got all my links squared away.  That google message had me spinning my wheels for hours this morning looking for a ghost! I really do learn a lot from all of your great contributions! Thanks so Much! Jill

    11/21/2012 10:29:31
    1. [FreeHelp] ATT RANDY: Speaking of the robots.txt.... (for pawashin GenWeb site)
    2. JFlorian
    3. Randy, Jill Dale posted that her freepages site had new unexplained 404s. I went to Webmaster Tools to check my sites to compare. All my "freepages" sites show 100% indexing, no errors. But for http://www.rootsweb.ancestry.com/~pawashin/, Webmaster Tools shows a huge jump from a less-than-8-to-10% (estimated) error rate to a complete 100% errors on 11/16, dropping to and now hovering at 55% errors on 11/19. They've been re-trying almost daily in November, according to the graph. Google wrote me on September 21st that they "see a significant increase in errors while crawling your site." The problem seems to have started around or before Aug 26th 2012 - that's as far back as the graph shows.

NOTE: ALL reported pages were uploaded well over two years ago and never had any problems with indexing. NO changes had been made to anything on the site. About half are township map pages but the other half are text pages. Did the server have an issue this summer? I re-uploaded the entire site just now. I want to make sure things are ok on the server side before I start marking items as fixed or ask Google to crawl again.

Judy
--
WASHINGTON COUNTY PA WEBSITES:
http://freepages.misc.rootsweb.com/~florian/
http://freepages.school-alumni.rootsweb.com/~florian/the-rockdoctor/
Coordinator of the Washington County PAGenWeb: http://www.rootsweb.com/~pawashin/

    11/21/2012 09:55:02
    1. Re: [FreeHelp] Google unable to access the robots text file
    2. JFlorian
    3. Jill, I'm almost positive that's the same message I got before from Google Webmaster Tools, but like many things, Google doesn't make its own words very clear - e.g. they write "Help" words that are not much help. The meaning is NOT that "THIS is YOUR SITE's problem" but an IF... Google uses a kind of "if so, then this" writing. It would be clearer if Google wrote this:

"You have some 404s on your site right now. ONE POSSIBLE cause on websites is not having a robots.txt file OR having one that ain't working. BUT we at Google REALLY aren't sure this is YOUR site's problem, so we will give you this general information: IF your site has content you don't want Google or other search engines to access, use a robots.txt file to specify how search engines should crawl your site's content. [But we don't know IF this applies to YOU or not.] [And because most people have websites at a top level, we will tell you this next thing:] Check to see that your robots.txt is working as expected. (Any changes you make to the robots.txt content below will not be saved.) Now, we see YOUR site is not located at the top level for the domain, SO you should know that a robots.txt file is only valid when located in the highest-level directory and applies to all directories within the domain. The robots.txt file that applies to your site (if one exists) is located at http://www.rootsweb.ancestry.com/robots.txt. This page provides information on that file. AND NOW that we've told you this, ignore it, because rootsweb takes care of that for you... haha, April Fool's."

What it should also say is:
1. Check LINKS. The bad link is NOT the page we list below - another April Fool's, just because Google likes joking. We want you to guess and learn "by guessing" that the problem is a bad link on another page. Which one? We aren't gonna tell you - haha!
2. Know that we love exasperating people in general, so when you can't find the problem, we will only say "look again", because when you find the problem, you'll see what Google's BOT "sees".

I'm trying to joke, but really, Google needs to re-write their help statements. I can only imagine how many people get angry at RW not having its robots file working when that isn't the real problem.

You asked about your index/follow tag. Here's mine for comparison, with a bracket at beginning and end:
meta name="robots" content="index,follow"

Also, I thought RW does not allow redirects? Judy

    11/21/2012 08:36:07
    1. Re: [FreeHelp] Google unable to access the robots text file
    2. Barry Carlson
    3. Jill, I haven't got time right now to check much further, but it appears to me that the error 12157 codes are generally related to FamilySearch links. I note that those links have changed from http:// to https:// which is why they are being indicated.

Barry

On 21/11/2012 8:31 a.m., chris dale wrote:
> Thanks for the suggestions. I hate to sound like a mental midget. That Xenu Link Sleuth program does an amazing compilation if I can figure out what to do with the 404 and 12157 error codes they list.
>
> When I go to search on my computer, "find inside the file" doesn't come up - just search all files and folders. I tried to search in my specific website file, chose the more advanced option and search sub file folders, and 1281 files come up with "This". I set Dreamweaver up to highlight any invalid code for me and I can't find any that is highlighted, and my sitewide check of links shows no broken links, but I can clearly see from the Xenu Link Sleuth program that a significant number of my external links are no longer working - which is huge to know. I am not familiar with the 12157 error code and what to do with them. I need to pore over this as it is all a bit over my head. I don't know how to do a search through my website page code to find specific possible errors like nbsp. What exactly does that mean - do I have inappropriate spaces, brackets or parent tags in my code, or? Sorry to be so elementary but this sounds like Greek to me :)
>
> What do you do when you create a page, save it, it gets uploaded to the remote server, and then you realize you made a typo in the "saved title", the page title gets changed, or you delete the page entirely? If the page has already been crawled and indexed then a 404 will come up, correct? What should one do to correct this?
>
> Thanks for the great information and feedback - I really appreciate the help! I could never have gotten my website off the ground without the help of this group.
>
> :) Jill

    11/21/2012 01:59:29
    1. Re: [FreeHelp] Google unable to access the robots text file
    2. Ron Lankshear
    3. On 21/11/2012 6:31 AM, chris dale wrote:
> If the page has already been crawled and indexed then a 404 will come up correct? What should one do to correct this?

It is not possible on Freepages to have special code for a 404. So pages I wanted to delete can come up in a valid search for a surname etc., which means cousins might find them and then not know what the 404 means. Well, then I put the page back and code a redirect, such as:

<http://freepages.genealogy.rootsweb.ancestry.com/~lankshear/Eagle/Grimmett/index.htm>

I make the page as nebulous as possible - no keywords, no words from the original page - there is really only the page name and my redirect details.

Ron Lankshear - Sydney NSW (from London - Shepherds Bush/Chiswick)
try my links http://freepages.rootsweb.ancestry.com/~lankshear/
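[Editor's sketch] Ron's "put the page back with a redirect" trick can be illustrated in a few lines. This is a hypothetical sketch, not Ron's actual page; the function name, target URL and delay are made up for illustration:

```python
# Regenerate a deleted page as a bare "tombstone" stub whose only content
# is a meta-refresh redirect, with no keywords for search engines to index.

def redirect_stub(target_url, delay_seconds=0):
    """Return a minimal HTML page that redirects the visitor to target_url."""
    return (
        "<!DOCTYPE html>\n"
        "<html>\n<head>\n"
        f'<meta http-equiv="refresh" content="{delay_seconds}; url={target_url}" />\n'
        "<title>Page moved</title>\n"
        "</head>\n<body>\n"
        f'<p>This page has moved to <a href="{target_url}">{target_url}</a>.</p>\n'
        "</body>\n</html>\n"
    )

# The target below is made up; on Freepages the stub would be uploaded under
# the old page's filename so existing links and indexed results still land.
html = redirect_stub("http://freepages.rootsweb.ancestry.com/~example/new-page.html")
print(html)
```

Uploading the stub under the old filename avoids the 404 entirely, which matters on hosts like Freepages where custom 404 pages and server-side redirects are not available.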

    11/21/2012 01:31:31
    1. Re: [FreeHelp] Google unable to access the robots text file
    2. Barry Carlson
    3. On 21/11/2012 2:36 a.m., chris dale wrote:
> Hello. All of a sudden on my website County Mayo Beginnings, I am getting this message on my Google Webmaster Tools:
> November 14, 2012
> http://www.rootsweb.ancestry.com/~irlmayo2/: Googlebot can't access your site
>
> There are 25 URL errors listed (29 starting November 14, according to Google Webmaster Tools), some of which look like this:
>
> 20 corcorans-claremorris 404 10/28/12
> 21 dunbrody 404 10/15/12
> 22 irish_migration_eng.. 404 10/21/12
> 23 brennans_carracastl.. 404 10/15/12
> 24 ballina 404 9/26/12
> 25 *%C2%A0%C2%A0This 404 9/27/12
>
> Some of the pages I am able to access from my website just fine, some were typos where I changed the page name and didn't know how to redirect the page, and some I have no idea where they are from, like #25 (if that makes sense). I went through all 25 errors that were listed and the pages that are actually on my website seem to work. Webmaster said the most likely explanation is that my site is overloaded? I am totally confused. Any suggestions what I should do? This support group helped me, an absolute novice, get this website off the ground and I was most appreciative for all your help. Now I seem to need some guidance again as I have hit a roadblock.
> :) Jill

Jill,

When you see %20 in an URL, the 20 is the hex value for a "space" and the % is the delimiter character. In the case of %C2%A0, the bytes C2 A0 represent a non-breaking space, i.e. &nbsp; in HTML.

I ran the Xenu Link Sleuth utility on your site, and the complete report can be found at:-

http://freepages.rootsweb.com/~bristowe/test/irlmayo2-check-links.html

You will find a link to the associated software at the top of the above page. If you download the program and install it, you will have the ability to let it look at any broken links through FTP access to your site.

Barry
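[Editor's sketch] Barry's decoding can be checked with Python's standard urllib.parse; the sample string is error #25 from the list above:

```python
# %XX escapes in URLs are percent-encoded bytes; unquote() decodes them
# (as UTF-8 by default). %C2%A0 is the two-byte UTF-8 encoding of U+00A0,
# the non-breaking space that HTML writes as &nbsp;.
from urllib.parse import unquote

decoded = unquote("*%C2%A0%C2%A0This")
print(repr(decoded))  # '*\xa0\xa0This' - two invisible non-breaking spaces

assert decoded == "*\u00a0\u00a0This"
assert unquote("%20") == " "  # an ordinary space, as Barry notes
```

So error #25 is a link whose text or href contained "*" followed by two non-breaking spaces and the word "This" - typically stray &nbsp; characters pasted into an href.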

    11/21/2012 12:39:53
    1. Re: [FreeHelp] Google unable to access the robots text file
    2. chris dale
    3. Here is what it says about Googlebot being blocked on my website http://www.rootsweb.ancestry.com/~irlmayo2/. Does this yield any clues? Sorry I seem so clueless, but this seems like Greek to me. I have always had <meta name="robots" content="index, follow" /> on my web pages to keep graphics files and other docs from being indexed - I don't think I changed anything in recent months. Jill

Blocked URLs

If your site has content you don't want Google or other search engines to access, use a robots.txt file to specify how search engines should crawl your site's content. Check to see that your robots.txt is working as expected. (Any changes you make to the robots.txt content below will not be saved.)

This site is not located at the top level for the domain. A robots.txt file is only valid when located in the highest-level directory and applies to all directories within the domain. The robots.txt file that applies to your site (if one exists) is located at http://www.rootsweb.ancestry.com/robots.txt. This page provides information on that file.

robots.txt file: http://www.rootsweb.ancestry.com/robots.txt
Blocked URLs: 0
Downloaded: Nov 14, 2012
Status: 200 (Success)

    11/20/2012 10:58:48
    1. Re: [FreeHelp] Google unable to access the robots text file
    2. chris dale
    3. Thanks for the great suggestion on putting the absent page back on with a redirect! I worked through the broken links noted by Xenu Link Sleuth (a great program, by the way - thanks!) and believe I have corrected all of the broken links, but I am still getting the unreachable robots.txt message in my Webmaster Tools. Apparently the problem began on November 14 and 15. I am trying to figure out how I can access the robots.txt in the logs for those days to try to analyze and correct the problem. Any suggestions? Again, thanks for everyone's time! :)Jill

    11/20/2012 10:40:18
    1. [FreeHelp] Google unable to access the robots text file
    2. chris dale
    3. Thanks for the suggestions. I hate to sound like a mental midget. That Xenu Link Sleuth program does an amazing compilation, if I can figure out what to do with the 404 and 12157 error codes they list.

When I go to search on my computer, "find inside the file" doesn't come up - just search all files and folders. I tried to search in my specific website file, chose the more advanced option and search sub file folders, and 1281 files come up with "This". I set Dreamweaver up to highlight any invalid code for me and I can't find any that is highlighted, and my sitewide check of links shows no broken links, but I can clearly see from the Xenu Link Sleuth program that a significant number of my external links are no longer working - which is huge to know. I am not familiar with the 12157 error code and what to do with them. I need to pore over this as it is all a bit over my head. I don't know how to do a search through my website page code to find specific possible errors like nbsp. What exactly does that mean - do I have inappropriate spaces, brackets or parent tags in my code, or? Sorry to be so elementary, but this sounds like Greek to me :)

What do you do when you create a page, save it, it gets uploaded to the remote server, and then you realize you made a typo in the "saved title", the page title gets changed, or you delete the page entirely? If the page has already been crawled and indexed then a 404 will come up, correct? What should one do to correct this?

Thanks for the great information and feedback - I really appreciate the help! I could never have gotten my website off the ground without the help of this group. :) Jill

    11/20/2012 04:31:55
    1. Re: [FreeHelp] Google unable to access the robots text file
    2. JFlorian
    3. Jill, On your computer, go to Search. Choose "Find inside the File" (2nd box) and copy in each name from the 404s. This will show you which page(s) in your website on your hard drive those LINKS appear on. Check each link for accurate pathways.

Percent (%) is how browsers encode characters such as a space. I think *%C2%A0%C2%A0This might be Unicode. When I google *%C2%A0%C2%A0This I can see other sites with %C2%A0%C2%A0The, or some other word besides The, This, That (whatever the first word of the sentence is). Therefore, you need to search your pages for EVERY "This" with a capital T. Before one of the "This" entries is that code, or spaces that are rendered as code.

I did a site search with a period and 2 spaces before "This" on your site. Those are pages you should check IN the coding:

". This" site:http://www.rootsweb.ancestry.com/~irlmayo2/

I also looked for a period, an asterisk, and 2 spaces before "This" - trying to get the * to show up - but the * didn't seem to matter much. If you can find which pages have "This" (capital letter) at the beginning of a sentence (with a period and a space or two), then you'll find the Unicode. Search in Google for *%C2%A0%C2%A0This and you'll see what I mean.

A 404 MAY mean you are NOT just looking for "that page" - you may be looking for a LINK on one or more pages that is directed wrongly at the correct page. Tracing 404s is frustrating. But I find Google "sees" what I overlook. So it's just a matter of sleuthing until you figure out just what Google is seeing.

Judy

On Tue, Nov 20, 2012 at 8:36 AM, chris dale <kitingdale@yahoo.com> wrote:
> Hello. All of a sudden on my website County Mayo Beginnings, I am getting this message on my Google Webmaster Tools:
> November 14, 2012
> http://www.rootsweb.ancestry.com/~irlmayo2/: Googlebot can't access your site
>
> There are 25 URL errors listed (29 starting November 14, according to Google Webmaster Tools), some of which look like this:
>
> 20 corcorans-claremorris 404 10/28/12
> 21 dunbrody 404 10/15/12
> 22 irish_migration_eng.. 404 10/21/12
> 23 brennans_carracastl.. 404 10/15/12
> 24 ballina 404 9/26/12
> 25 *%C2%A0%C2%A0This 404 9/27/12
>
> Some of the pages I am able to access from my website just fine, some were typos where I changed the page name and didn't know how to redirect the page, and some I have no idea where they are from, like #25 (if that makes sense). I went through all 25 errors that were listed and the pages that are actually on my website seem to work. Webmaster said the most likely explanation is that my site is overloaded? I am totally confused. Any suggestions what I should do? This support group helped me, an absolute novice, get this website off the ground and I was most appreciative for all your help. Now I seem to need some guidance again as I have hit a roadblock.
> :) Jill
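[Editor's sketch] Judy's "search EVERY This" step amounts to hunting for invisible U+00A0 characters in the saved page source. A hedged sketch of that search (the helper name and directory are made up; only the helper is exercised here):

```python
# Scan text for literal non-breaking spaces (U+00A0). They are invisible
# in most editors but show up as %C2%A0 in URLs that crawlers construct
# from broken link markup.
from pathlib import Path

def nbsp_positions(text):
    """Return (line_number, column) pairs for each non-breaking space."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for col, ch in enumerate(line):
            if ch == "\u00a0":
                hits.append((lineno, col))
    return hits

# Usage against a local copy of the site (directory name is hypothetical):
# for page in Path("my_site").glob("**/*.html"):
#     hits = nbsp_positions(page.read_text(encoding="utf-8"))
#     if hits:
#         print(page, hits)

print(nbsp_positions(".\u00a0\u00a0This sentence has hidden spaces"))  # [(1, 1), (1, 2)]
```

This finds the characters directly rather than guessing at them through Google searches, which is less error-prone than eyeballing rendered pages.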

    11/20/2012 03:37:57
    1. [FreeHelp] Google unable to access the robots text file
    2. chris dale
    3. Hello. All of a sudden on my website County Mayo Beginnings, I am getting this message on my Google Webmaster Tools:

November 14, 2012
http://www.rootsweb.ancestry.com/~irlmayo2/: Googlebot can't access your site

There are 25 URL errors listed (29 starting November 14, according to Google Webmaster Tools), some of which look like this:

20 corcorans-claremorris 404 10/28/12
21 dunbrody 404 10/15/12
22 irish_migration_eng.. 404 10/21/12
23 brennans_carracastl.. 404 10/15/12
24 ballina 404 9/26/12
25 *%C2%A0%C2%A0This 404 9/27/12

Some of the pages I am able to access from my website just fine, some were typos where I changed the page name and didn't know how to redirect the page, and some I have no idea where they are from, like #25 (if that makes sense). I went through all 25 errors that were listed and the pages that are actually on my website seem to work. Webmaster said the most likely explanation is that my site is overloaded? I am totally confused. Any suggestions what I should do? This support group helped me, an absolute novice, get this website off the ground and I was most appreciative for all your help. Now I seem to need some guidance again as I have hit a roadblock. :) Jill

    11/19/2012 10:36:24
    1. [FreeHelp] Adobe Edge Web Fonts - free to use.
    2. Barry Carlson
    3. I recently had occasion to use the Adobe Edge Web Fonts free service, and found it remarkably easy to set up and use.

http://www.edgefonts.com/

There is no messing around with downloading font-kits, unzipping and placing into the appropriate folder/directory. These fonts are downloaded into the head of your page with one short line of Javascript, and your CSS is set up to make use of them. It is best to ensure that you also have a default font set (which will get used during the time it takes the primary font to download - half a second or so), just in case the user's Javascript is disabled. The above web page gives a detailed description of how to use the fonts and the format to be used in calling them.

I decided to replace the Tahoma and 'Courier New' fonts that I was using in:-

http://freepages.rootsweb.com/~bristowe/fade-images.html

... with Source Sans Pro, and a mono-spaced font, Source Code Pro. The download script is:-

<!--code>
<script src="http://use.edgefonts.net/source-sans-pro:n4,i4; source-code-pro:n4.js"></script>
<code-->

... which I've placed (in this message) within commented-out code tags to prevent the src from calling the url. The CSS is in the head of the page, and I was able to increase the font-sizes for both fonts to give a more readable page.

Here is a little bit of Firefox-only CSS which employs an experimental property called 'text-rendering', and I have also used it in the page:

@-moz-document url-prefix() {
  body {
    text-rendering: geometricPrecision;
    /* text-rendering: optimizeLegibility; */
  }
}

Barry

    11/18/2012 10:35:29